RESEARCH PROJECTS
with Puneet Manchanda and Eric Schwartz
Free ad-supported streaming of on-demand content is growing. Platforms that provide this service need to better understand how ad delivery affects the consumption experience: do viewers zap (leave a program incomplete) during immediate ad exposure or during subsequent (program) content exposure? Using debiased machine learning with Hausman instruments, we estimate the causal effects of four ad-delivery levers on immediate ad zapping and subsequent content zapping. We find that, on average, an increase in the number of pods (ad breaks), the length of pods, or the repetition of ads produces a larger increase in subsequent content zapping than in immediate ad zapping. In contrast, an increase in the spacing until the next pod produces a larger decrease in immediate ad zapping than in subsequent content zapping. We also investigate differences in effects across sub-types of zapping: (a) switching to another episode of the same TV show, (b) switching to a new TV show or movie, and (c) stopping watching. For some ad-delivery levers, the platform faces a tradeoff between preventing ad zapping and preventing content zapping, depending on whether it wants to promote stickiness to the content or to the platform.
Keywords: Streaming Platforms, Ad Delivery, Zapping, Causal Inference, Debiased Machine Learning
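
To make the estimation idea concrete, the sketch below implements a cross-fitted partialling-out IV estimator in Python, in the spirit of debiased machine learning with a Hausman-style instrument. The simulated data, variable names, and the use of random forests as nuisance learners are illustrative assumptions, not the paper's actual data or implementation.

    # Debiased ML with an instrument: cross-fit ML residuals of the outcome (zapping),
    # the treatment (an ad-delivery lever), and the instrument on controls, then run
    # IV on the residualized system. All quantities below are simulated for illustration.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import KFold, cross_val_predict

    rng = np.random.default_rng(0)
    n, p = 5000, 10
    X = rng.normal(size=(n, p))                      # viewer / session controls
    U = rng.normal(size=n)                           # unobserved taste shock (confounder)
    Z = X[:, 0] + rng.normal(size=n)                 # Hausman-style instrument (assumed exogenous)
    D = 0.8 * Z + X[:, 1] + U + rng.normal(size=n)   # ad-delivery lever, e.g. number of pods
    Y = 0.5 * D + X[:, 1] - X[:, 2] + 2.0 * U + rng.normal(size=n)  # zapping outcome; true effect = 0.5

    def residualize(target, X, cv):
        """Cross-fitted residuals: target minus an ML prediction built from controls X."""
        model = RandomForestRegressor(n_estimators=200, min_samples_leaf=20, random_state=0)
        return target - cross_val_predict(model, X, target, cv=cv)

    cv = KFold(n_splits=5, shuffle=True, random_state=0)
    Y_res, D_res, Z_res = (residualize(v, X, cv) for v in (Y, D, Z))

    # IV on residuals: theta = (Z_res . Y_res) / (Z_res . D_res)
    theta = (Z_res @ Y_res) / (Z_res @ D_res)
    print(f"Estimated effect of the ad-delivery lever on zapping: {theta:.3f}")

Because the confounder U shifts both the lever and zapping, a plain residual-on-residual regression would be biased; the instrument recovers the effect, analogous to the role the Hausman instruments play in the paper's setting.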
UNBOXING ENGAGEMENT IN YOUTUBE INFLUENCER VIDEOS: AN ATTENTION-BASED APPROACH
with Puneet Manchanda
Influencer marketing videos have surged in popularity, yet significant gaps remain in understanding the relationship between video features and engagement. This challenge is intensified by the complexity of interpreting unstructured data. While deep learning models effectively leverage unstructured data to predict business outcomes, they often function as black boxes with limited interpretability, particularly when human validation is hindered by the absence of a known ground truth. To address this issue, the authors develop an “interpretable deep learning framework” that not only makes good out-of-sample predictions using unstructured data but also provides insight into the relationships it captures. Inspired by research on visual attention in print advertising, the interpretation approach uses measures of model attention to video features, eliminating spurious associations through a two-step process and shortlisting relationships for formal causal testing. The method applies across well-known attention mechanisms (additive attention, scaled dot-product attention, and gradient-based attention) and to text, audio, and video image data. Validated in simulations, the approach outperforms benchmark feature-selection methods. Applying the framework to YouTube influencer videos, the authors link video features to measures of shallow and deep engagement grounded in the dual-system framework of thinking. The findings guide influencers and brands in prioritizing video features associated with deep engagement.
Keywords: Influencer Videos, Interpretable Deep Learning, Social Media Engagement, Unstructured Data Analysis, Attention-based Models
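
To illustrate the kind of model attention measure the interpretation step relies on, here is a minimal Python/PyTorch sketch of scaled dot-product attention over per-feature embeddings, with candidate features ranked by their average attention weight. The feature names, architecture, and data are hypothetical and do not reproduce the authors' framework.

    # A tiny engagement model with scaled dot-product attention over video features.
    # The attention weights serve as a per-feature "model attention" measure that can
    # be screened before any formal causal testing. Everything here is illustrative.
    import torch
    import torch.nn as nn

    class AttentionEngagementModel(nn.Module):
        def __init__(self, n_features, d_model=16):
            super().__init__()
            self.embed = nn.Linear(1, d_model)        # embed each scalar feature separately
            self.query = nn.Parameter(torch.randn(d_model))
            self.key = nn.Linear(d_model, d_model)
            self.value = nn.Linear(d_model, d_model)
            self.out = nn.Linear(d_model, 1)
            self.d_model = d_model

        def forward(self, x):                         # x: (batch, n_features)
            h = self.embed(x.unsqueeze(-1))           # (batch, n_features, d_model)
            scores = (self.key(h) @ self.query) / self.d_model ** 0.5
            attn = torch.softmax(scores, dim=1)       # attention weight per feature
            context = (attn.unsqueeze(-1) * self.value(h)).sum(dim=1)
            return self.out(context).squeeze(-1), attn

    # In practice the model is first trained on engagement labels; here we only show
    # how attention weights are extracted from a fitted model and turned into a ranking.
    feature_names = ["brand_mention", "speech_rate", "scene_cuts", "music_energy"]  # hypothetical
    model = AttentionEngagementModel(n_features=len(feature_names))
    with torch.no_grad():
        _, attn = model(torch.randn(256, len(feature_names)))
    ranking = attn.mean(dim=0).argsort(descending=True)
    print([feature_names[i] for i in ranking.tolist()])

The same ranking logic would carry over to additive or gradient-based attention; only the way the per-feature scores are computed changes.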