Jonathan Rubenstein Is Not Here To Fake It

AI

TECH

Rubenstein is staking his future on something radical: real people

10 MIN READ TIME

WORDS BY Caleb Church

PHOTOGRAPHY COURTESY OF FACE FWD

It started with a cigar, a birthday toast, and a conversation about the future of acting. The SAG-AFTRA strike was in full swing, and one of its fiercest battlegrounds was AI. Could a studio clone an actor’s face and voice for a sequel, without their consent? Should they? Jonathan Rubenstein didn’t just think it was a problem—he saw it as a turning point. That night, bourbon in hand, he scribbled the first notes of what would become FaceFWD.ai: a platform built to protect digital identity in an era when authenticity is constantly up for debate. 

In a world increasingly populated by synthetic avatars, composite voices, and algorithmic performances, Rubenstein is staking his future on something radical: real people. FaceFWD.ai is a creator-first marketplace that lets individuals license their likeness for AI-generated content—on their terms. Not only does it give talent control over where and how their image appears, it requires their explicit approval before anything goes live. And if a brand crosses a line? There’s a kill switch. 

To understand why this matters, you have to understand the difference between authentic and inauthentic AI. Synthetic avatars—those dreamed up entirely by machine—are fictional by nature. They might be convincing, but they carry no context, no history, no lived perspective. Authentic avatars, by contrast, are grounded in real people with real values, reputations, and relationships. FaceFWD is Rubenstein’s answer to a question more consumers should be asking: who’s really behind the content I’m seeing? 

Rubenstein’s path to this moment has been anything but artificial. A seasoned digital operator with experience at BET, the NHL, and Disney+, he’s launched global platforms, led high-stakes marketing ops, and managed the chaos of influencer campaigns at scale. He’s survived media mergers, built teams from scratch, and earned a reputation as a sharp thinker with a get-it-done ethos. His resume reads like a crash course in digital transformation. 

With FaceFWD, Rubenstein is putting all that experience to work in service of something more personal: giving creators back their power. 

I sat down with Rubenstein to trace the journey from concept to company, explore the challenges of building ethical AI tools, and hear firsthand what it means to protect authenticity in a synthetic world. 


CC: Jonathan, it's great to connect. Before we dive into the deep end of AI and digital identity, tell us a bit about your journey. You’ve had a fascinating career arc through major media landscapes. How did those experiences at BET, the NHL, and Disney+ shape your approach to what you're building now with FaceFWD? 

JR: It's been a journey of learning and adapting, that's for sure. At BET, I really cut my teeth on digital media from the ground up—everything from audience engagement to ad inventory, figuring out what makes content truly perform. That's where I saw firsthand how big stars could move the needle and how crucial engagement was. Then, at the NHL, it was all about systems and operations. I rebuilt the digital operations team from scratch, which taught me a ton about building teams, robust workflows, and the importance of a strong backend. That company was BAMTech, which was actually born out of Major League Baseball Advanced Media and was later acquired by Disney to build out ESPN+ and Disney+.

Moving over to Disney+ was a whole different scale, a global view. I was involved in launching in over 30 countries and supporting marketing ops for Disney+, ESPN+, and Hulu. It was an incredible experience. But one thing that consistently stood out, no matter how big the brand, was influencer marketing. It was always slow, often messy, and incredibly expensive. Even with the behemoth of Disney behind you, it was still hard. That's a big part of what FaceFWD aims to streamline.  

CC: That idea of streamlining and solving complex problems seems to be a core part of your DNA. The story of FaceFWD.ai’s inception—bourbon, cigars, the SAG-AFTRA strike—paints a vivid picture. Walk me through that "aha!" moment. What specifically clicked for you that night? 

JR: Yeah, it was one of those nights. My neighbor and I were celebrating his birthday, and naturally, the conversation turned to the actors’ strike, which was in full swing. A huge sticking point, a real battleground, was AI—specifically, the idea that a studio could clone an actor’s face and voice, potentially for a sequel or other projects, without their explicit consent or further compensation. It just hit me how wild that was. With AI technology advancing so rapidly, this wasn't a niche issue; it was on the verge of becoming a problem for everyone. It’s not just celebrities who have digital footprints anymore; we all do, and that presence can be replicated. It dawned on me that in this age of AI and even blockchain, there’s just no excuse for anyone’s likeness to be used without their notification, approval, and proper compensation.


CC: So, from that realization, how did FaceFWD start to take shape? What’s the core mission you landed on? 

JR: Control and consent. That’s the bedrock. At its heart, FaceFWD is a digital marketplace. A creator can upload their avatar—an AI-generated version of themselves that they create through our platform—and then they set all the terms: pricing, how long it can be used, what categories it can’t be used for. They are completely in charge. A brand might want to purchase the right to create content with that avatar, but the creator always, always has final approval. If the content’s tone, script, or brand association doesn’t feel right, they hit reject. They can even pull the plug mid-process if needed.  
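To make that workflow concrete, here is a minimal sketch in TypeScript of how a creator's listing terms and a deal's approval states might be modeled. The names and fields are illustrative assumptions, not FaceFWD's actual schema or API.

```typescript
// Illustrative sketch only -- these types are assumptions, not FaceFWD's real data model.

// Terms the creator sets when listing their avatar.
interface LikenessListing {
  creatorId: string;
  avatarId: string;
  priceUsd: number;              // what a brand pays for a piece of content
  maxUsageDays: number;          // how long licensed content may run
  excludedCategories: string[];  // e.g. ["alcohol", "politics", "gambling"]
}

// A deal moves through explicit, creator-controlled states.
type DealStatus =
  | "proposed"          // brand has requested the license
  | "in_production"     // content is being generated
  | "pending_approval"  // watermarked draft awaits the creator
  | "approved"          // creator signed off; content can go live
  | "rejected"          // creator said no to tone, script, or brand fit
  | "killed";           // creator pulled the plug mid-process

interface Deal {
  listing: LikenessListing;
  brandId: string;
  status: DealStatus;
}

// Nothing goes live without the creator's explicit sign-off.
function canPublish(deal: Deal): boolean {
  return deal.status === "approved";
}
```

The design point a model like this captures is that every path to publication runs through the creator's approval state; there is no step a brand can take unilaterally.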


CC: That approval workflow sounds absolutely vital, especially with the potential for misuse. You've mentioned a "kill switch" – how does that empower creators? 

JR: It's fundamental. Every piece of video content created is watermarked and can’t be downloaded until the creator gives it the green light. This protects everyone involved. The creator’s image isn't getting out there in a way they haven't sanctioned, and the brand knows they’re getting content that’s ethically sourced and cleared for use. We’re trying to build an infrastructure of mutual respect. If a brand crosses a line or the creator simply isn't comfortable, that kill switch allows them to cancel the deal. Money gets refunded, no questions asked, because the platform is built with talent as the priority.  
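A rough sketch of how that approval gate and kill switch could look in code, again assumed for illustration rather than drawn from the platform itself:

```typescript
// Illustrative sketch -- assumed logic, not FaceFWD's actual implementation.

interface ContentDraft {
  dealId: string;
  watermarked: boolean;       // every draft carries a watermark by default
  approvedByCreator: boolean; // flipped only when the creator gives the green light
  paymentUsd: number;
}

// The file only becomes downloadable once the creator approves.
function canDownload(draft: ContentDraft): boolean {
  return draft.approvedByCreator;
}

// The kill switch: cancel the deal and refund the brand in full, no questions asked.
function killDeal(draft: ContentDraft): { status: "killed"; refundUsd: number } {
  return { status: "killed", refundUsd: draft.paymentUsd };
}
```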

CC: You're drawing a clear line between "authentic" and "inauthentic" AI avatars. In a world getting flooded with synthetic media, why is this distinction so critical? 

JR: It’s about accountability and real connection. Authentic avatars are grounded in real people—they have identities, values, reputations, and existing audiences. It’s your digital twin, for all intents and purposes. An inauthentic avatar, on the other hand, is a composite, a complete fiction dreamed up by a machine. It might look human, but it has no history, no lived perspective, no one real behind it. When brands start using purely synthetic, unaccountable avatars to push products or messages, who’s really behind that content? With authentic avatars, there’s a real person. You’re borrowing their face, their voice, their credibility, and that inherently comes with responsibility.  

CC: It’s interesting you mention problem-solving. You’ve described yourself as someone who “loves to solve puzzles and hates losing.” How does that mindset translate when you're faced with the complexities of building something entirely new like FaceFWD, especially in a field as nascent and debated as AI ethics? 

JR: That’s definitely my personality. I’m overly competitive about trivial things sometimes – I once won a 50-person amateur poker tournament not because I'm a poker whiz, but because I just refused to give up and outlasted everyone! But seriously, when a challenge is in front of me, I dig in. My approach is pretty straightforward: one step at a time. If something’s broken or I need to figure something out, my first step is often Google. Read, experiment, build a prototype. You don’t need all the answers on day one; you need to keep moving forward and break the problem down into its simplest form.  

I remember at BET, my boss Bobby Singh—still one of my favorite bosses—would give me these challenges. He’d say, "Jonathan, we need to figure out VOD. Go figure it out." And at first, I was like, "What?!" But he empowered me to dive in, research, and present how I thought it should work. He called me his "favorite utility player" because he could point me at a problem, and I’d go fix it. That’s when it really dawned on me how much I loved that process. The perception of complexity often scares people off, but if you break it down, it’s usually more digestible.

CC: That’s a great insight. So, with FaceFWD, you initially thought about blockchain for tracking likeness, right? But then you pivoted. What prompted that shift in strategy? 

JR: Exactly! The initial "aha!" was, why don't we put everyone's likeness on the blockchain? It seemed like a clear way to track everything. But then I started researching how to actually do that, and honestly, it was way too complicated for a starting point. It was a steep learning curve, and I realized I might never get there in time if I stuck to that rigidly. So, I took my own advice: let's back this up and simplify. Blockchain might be an end goal, but at a baseline, we need a robust contracting mechanism and a clear approval mechanism. There are usually simpler ways to get the rubber on the road early.  


CC: You’re clearly passionate about putting creators in the driver's seat. But AI-generated content often carries a stigma of being inherently "fake." What kind of cultural shift are you hoping FaceFWD can spark in how we perceive authenticity in this new digital landscape? 

JR: That’s a huge part of it. I want people to stop treating AI content as inherently fake or valueless. If it’s made with consent, if the person whose likeness is used genuinely stands behind the message, then it is real. An AI avatar of a creator talking about a brand they genuinely love can be just as authentic as, maybe even more so than, a traditionally shot commercial where an actor is just reading lines. Why does an AI-generated video of you talking about how much you love, say, lava lamps have to be seen as not real, if you genuinely love lava lamps? Just because it’s made with AI doesn’t mean it's a lie or "slop." It doesn't have to be. We need to rethink what authenticity truly means in this era.

CC: Looking ahead, then, what's your dream outcome for FaceFWD? If you achieve everything you're setting out to do, what does that future look like? 

JR: My dream is that FaceFWD becomes the standard for creating and licensing AI content. That we cultivate an ecosystem where creators are fundamentally protected, brands are respected, and the content we all consume genuinely reflects the consent and the true intent of the people in it. That would be the ultimate win. We’re at a tipping point: either creators take control of their digital selves, or they risk getting left behind in a world where anyone can essentially copy and paste their identity. We're offering a legitimate, ethical pathway.  

CC: Jonathan, this has been incredibly insightful. Thanks for breaking it all down. 

JR: Thanks for having me, Caleb. Great conversation. 

EDITOR’S NOTE:

THIS TRANSCRIPT HAS BEEN EDITED FOR BREVITY.
