Technology
The uproar over Vogue’s AI-generated ad isn’t just about fashion

Sarah Murray recalls the first time she saw an artificial model in fashion: It was 2023, and a beautiful young woman of color donned a Levi’s denim overall dress. Murray, a commercial model herself, said it made her feel sad and exhausted.
The iconic denim company had teamed up with the AI studio Lalaland.ai to create “diverse” digital fashion models for more inclusive ads. In an industry that has failed for years to employ diverse human models, the backlash was swift, with New York Magazine calling the decision “artificial diversity.”
“Modeling as a profession is already challenging enough without having to compete with now new digital standards of perfection that can be achieved with AI,” Murray told TechCrunch.
Two years later, her worries have compounded. Brands continue to experiment with AI-generated models, to the consternation of many fashion lovers. The latest uproar came after Vogue’s July print edition featured a Guess ad with a typical model for the brand: thin yet voluptuous, glossy blond tresses, pouty rose lips. She exemplified North American beauty standards, but there was one problem — she was AI-generated.
The internet buzzed for days, in large part because the AI-generated beauty showed up in Vogue, the fashion bible that dictates what is and is not acceptable in the industry. The model appeared in an advertisement, not a Vogue editorial spread, and Vogue told TechCrunch the ad met its advertising standards.
To many, an ad versus an editorial is a distinction without a difference.
TechCrunch spoke to fashion models, experts, and technologists to get a sense of where the industry is headed now that Vogue seems to have put a stamp of approval on technology that’s poised to dramatically change the fashion industry.
They said the Guess ad drama highlights questions arising within creative industries being touched by AI’s silicon fingers: When high-quality creative work can be done by AI in a fraction of the time and cost, what’s the point of humans? And in the world of fashion, what happens to the humans — the models, photographers, stylists, and set designers — performing those jobs?
“It’s just so much cheaper”
Sinead Bovell, a model and founder of the WAYE organization who wrote about CGI models for Vogue five years ago, told TechCrunch that “e-commerce models” are most under threat of automation.
E-commerce models are the ones who pose for advertisements or display clothes and accessories for online shoppers. Compared to high-fashion models, whose striking, often unattainable looks are featured in editorial spreads and on runways, they’re more realistic and relatable.
“E-commerce is where most models make their bread and butter,” Bovell said. “It’s not necessarily the path to model fame or model prestige, but it is the path for financial security.”

That fact runs directly counter to the pressure many brands feel to automate such shoots. Paul Mouginot, an art technologist who has worked with luxury brands, said it’s simply expensive to work with live models, especially when it comes to photographing them in countless garments, shoes, and accessories.
“AI now lets you start with a flat-lay product shoot, place it on a photorealistic virtual model, and even position that model in a coherent setting, producing images that look like genuine fashion editorials,” he told TechCrunch.
Brands, in some ways, have been doing this for a while, he said. Mouginot, who is French, cited the French retailer Veepee as an example of a company that has used virtual mannequins to sell clothes since at least 2013. Other notable brands like H&M, Mango, and Calvin Klein have also resorted to AI models.
Amy Odell, a fashion writer and author of a recently published biography on Gwyneth Paltrow, put it more simply: “It’s just so much cheaper for [brands] to use AI models now. Brands need a lot of content, and it just adds up. So if they can save money on their print ad or their TikTok feed, they will.”
PJ Pereira, co-founder of AI ad firm Silverside AI, said it really comes down to scale. Every conversation he’s had with fashion brands circles back to the fact that the entire marketing system was built for a world where brands produced just four big pieces of content per year. Social media and e-commerce have changed that, and now brands need anywhere from 400 to 400,000 pieces; it’s too expensive for brands, especially small ones, to keep up.
“There’s no way to scale from four to 400 or 400,000 with just process tweaks,” he added. “You need a new system. People get angry. They assume this is about taking money away from artists and models. But that’s not what I’ve seen.”
From “diverse” models to AI avatars
Murray, a commercial model, understands the cost benefits of using AI models, but only to an extent.

She lamented that brands like Levi’s claim AI is only meant to supplement human talent, not replace it.
“If those [brands] ever had the opportunity to stand in line at an open casting call, they would know about the endless amounts of models, including myself, that would dream of opportunities to work with their brands,” she said. “They would never need to supplement with anything fake.”
She thinks such a shift will hit “non-traditional” (read: diverse) commercial models like herself. That was the main problem with the Levi’s ad: rather than hiring diverse talent, the brand generated it artificially.
Bovell calls this “robot cultural appropriation,” or the idea that brands can just generate certain, especially diverse, identities to tell a brand story, even if the person who created the technology isn’t of that same identity.
And though Pereira argues that it’s unrealistic to shoot every garment on every type of model, that hasn’t calmed the fears many diverse models have about what’s to come.
“We already see an unprecedented use of certain terms in our contracts that we worry indicate that we are possibly signing away our rights for a brand to use our face and anything recognizable as ourselves to train their future AI systems,” Murray said.
Some see generating likenesses of models as a way forward in the AI era. Sara Ziff, a former model and founder of the Model Alliance, is working to pass the Fashion Workers Act, which would require brands to get a model’s clear consent and provide compensation for using their digital replicas. Mouginot said this lets models appear at several shoots on the same day and possibly generate additional income.
That’s “precious when a sought-after model is already traveling constantly,” he continued. But at the same time, whenever an avatar is hired, human labor is replaced. “What a few players gain can mean fewer opportunities for many others.”
If anything, Bovell said the bar is now higher for models looking to compete with the distinctive and the digitized. She suggested that models use their platforms to build their personal brands, differentiate themselves, and work on new revenue streams like podcasting or brand endorsements.
“Start to take those opportunities to tell your unique human story,” she said. “AI will never have a unique human story.”
That sort of entrepreneurial mindset is becoming table stakes across industries — from journalism to coding — as AI creates the conditions for the most self-directed learners to rise.
Room for another view

Mouginot sees a world where some platforms stop working with human models altogether, though he also believes humans share a desire for the “sensual reality of objects, for a touch of imperfection and for human connection.”
“Many breakthrough models succeed precisely because of a distinctive trait, such as teeth, gaze, or attitude, that is slightly imperfect by strict standards yet utterly charming,” he said. “Such nuances are hard to encode in zeros and ones.”
This is where startup and creative studio Artcare thrives, according to Sandrine Decorde, the firm’s CEO and co-founder. She refers to her team as “AI artisans,” creative people who use tools like Flux from Black Forest Labs to fine-tune AI-generated models that have that touch of unique humanity.
Much of the work Decorde’s firm does today involves producing AI-generated babies and children for brands. Employing minors in the fashion industry has historically been a gray area rife with exploitation and abuse. Ethically, Decorde argues, bringing generative AI to children’s fashion makes sense, particularly when the market demand is so high.
“It’s like sewing; it’s very delicate,” she told TechCrunch, referring to creating AI-generated models. “The more time we spend on our datasets and image refinements, the better and more consistent our models are.”

Part of the work is building out a library of distinctive artifacts. Decorde noted that many AI-generated models — like the ones created by Seraphinne Vallora, the agency behind Vogue’s Guess ad — are too homogeneous. Their lips are too perfect and symmetrical. Their jawlines are all the same.
“Imagery needs to make an impact,” Decorde said, noting that many fashion brands like to work exclusively with certain models, a desire that has spilled over into AI-generated models. “A model embodies a fashion brand.”
Pereira added that his firm combats homogeneity in AI “with intention” and warned that as more content gets made by more people who aren’t intentional, all of the output feeds back into computer models, amplifying bias.
“Just like you would cast for a wide range of models, you have to prompt for that,” he said. “You need to train [models] with a wide range of appearances. Because if you don’t, the AI will reflect whatever biases it was trained on.”
An AI future is promised, but uncertain
The use of AI modeling technology in fashion is still mostly experimental, Claudia Wagner, founder of modeling booking platform Ubooker, told TechCrunch. She and her team saw the Guess ad and found it technically interesting, but neither impactful nor new.

“It feels like another example of a brand using AI to be part of the current narrative,” she told TechCrunch. “We’re all in a phase of testing and exploring what AI can add — but the real value will come when it’s used with purpose, not just for visibility.”
Brands are getting visibility from using AI — and the Guess ad is the latest example. Pereira said his firm recently tested a fully AI-generated product video on TikTok that got more than a million views with mostly negative comments.
“But if you look past the comments, you see that there’s a silent majority — almost 20x engagement — that vastly outnumber the criticism,” he continued. “The click-through rate was 30x the number of complaints, and the product saw a steep hike in sales.”
He, like Wagner, doesn’t think AI models are going away anytime soon. If anything, the process of using AI will be integrated into the creative workflow.
“Some brands feel good about using fully artificial models,” Pereira said. “Others prefer starting with real people and licensing their likeness to build synthetic shoots. And some brands simply don’t want to do it — they worry their audiences won’t accept it.”
Wagner said what is becoming evident is that human talent remains central, especially when authenticity and identity are part of a brand’s story. That’s especially true for luxury heritage brands, which are usually slow to adopt new technologies.
Though Decorde noted many high-fashion brands are quietly experimenting with AI, Mouginot said many are still trying to define their AI policies and are avoiding fully AI-generated people at the moment. It’s one reason why Vogue’s inclusion of an AI model was such a shock.
Bovell wondered whether the ad was Vogue’s way of testing how the world would react to merging high fashion with AI.
So far, the reaction hasn’t been great. It’s unclear whether the magazine thinks it can ride out the backlash.
“What Vogue does matters,” Odell said. “If Vogue ends up doing editorials with AI models, I think that’s going to make it okay. In the same way the industry was really resistant to Kim Kardashian and then Vogue featured her. Then it was okay.”
Technology
Pintarnya raises $16.7M to power jobs and financial services in Indonesia

Pintarnya, an Indonesian employment platform that goes beyond job matching by offering financial services along with full-time and side-gig opportunities, said it has raised a $16.7 million Series A round.
The funding was led by Square Peg with participation from existing investors Vertex Venture Southeast Asia & India and East Ventures.
Ghirish Pokardas, Nelly Nurmalasari, and Henry Hendrawan founded Pintarnya in 2022 to tackle two of the biggest challenges Indonesians face daily: earning enough and borrowing responsibly.
“Traditionally, mass workers in Indonesia find jobs offline through job fairs or word of mouth, with employers buried in paper applications and candidates rarely hearing back. For borrowing, their options are often limited to family/friend or predatory lenders with harsh collection practices,” Henry Hendrawan, co-founder of Pintarnya, told TechCrunch. “We digitize job matching with AI to make hiring faster and we provide workers with safer, healthier lending options — designed around what they can reasonably afford, rather than pushing them deeper into debt.”
Around 59% of Indonesia’s workforce of 150 million is employed in the informal sector, and these workers often struggle to access formal financial services because they lack verifiable income and official employment documentation.
Pintarnya tackles this challenge by partnering with asset-backed lenders to offer secured loans, using collateral such as gold, electronics, or vehicles, Hendrawan added.
Since its seed round in 2022, the platform has grown to serve more than 10 million job seekers and 40,000 employers nationwide. Revenue has increased almost fivefold year-over-year, and the company expects to reach break-even by the end of the year, Hendrawan noted. Pintarnya primarily serves users aged 21 to 40, most of whom have a high school education or a diploma below the university level. The startup aims to focus on this underserved segment, given Indonesia’s large population of blue-collar and informal workers.
“Through the journey of building employment services, we discovered that our users needed more than just jobs — they needed access to financial services that traditional banks couldn’t provide,” said Hendrawan.

While Indonesia already has job platforms like JobStreet, Kalibrr, and Glints, these primarily cater to white-collar roles, which represent only a small portion of the workforce, according to Hendrawan. Pintarnya’s platform is designed specifically for blue-collar workers, offering tailored experiences such as quick-apply options for walk-in interviews, affordable e-learning on relevant skills, in-app opportunities for supplemental income, and seamless connections to financial services like loans.
The same trend is evident in Indonesia’s fintech sector, which similarly caters to white-collar or upper-middle-class consumers. Conventional credit scoring models for loans, which rely on steady monthly income and bank account activity, often leave blue-collar workers overlooked by existing fintech providers, Hendrawan explained.
When asked about which fintech services are most in demand, Hendrawan mentioned, “Given their employment status, lending is the most in-demand financial service for Pintarnya’s users today. We are planning to ‘graduate’ them to micro-savings and investments down the road through innovative products with our partners.”
The new funding will enable Pintarnya to strengthen its platform technology and broaden its financial service offerings through strategic partnerships. With most Indonesian workers employed in blue-collar and informal sectors, the co-founders see substantial growth opportunities in the local market. Leveraging their extensive experience in managing businesses across Southeast Asia, they are also open to exploring regional expansion when the timing is right.
“Our vision is for Pintarnya to be the everyday companion that empowers Indonesians to not only make ends meet today, but also plan, grow, and upgrade their lives tomorrow … In five years, we see Pintarnya as the go-to super app for Indonesia’s workers, not just for earning income, but as a trusted partner throughout their life journey,” Hendrawan said. “We want to be the first stop when someone is looking for work, a place that helps them upgrade their skills, and a reliable guide as they make financial decisions.”
Technology
OpenAI warns against SPVs and other ‘unauthorized’ investments

In a new blog post, OpenAI warns against “unauthorized opportunities to gain exposure to OpenAI through a variety of means,” including special purpose vehicles, known as SPVs.
“We urge you to be careful if you are contacted by a firm that purports to have access to OpenAI, including through the sale of an SPV interest with exposure to OpenAI equity,” the company writes. The blog post acknowledges that “not every offer of OpenAI equity […] is problematic” but says firms may be “attempting to circumvent our transfer restrictions.”
“If so, the sale will not be recognized and carry no economic value to you,” OpenAI says.
Investors have increasingly used SPVs (which pool money for one-off investments) as a way to buy into hot AI startups, prompting other VCs to criticize them as a vehicle for “tourist chumps.”
Business Insider reports that OpenAI isn’t the only major AI company looking to crack down on SPVs, with Anthropic reportedly telling Menlo Ventures it must use its own capital, not an SPV, to invest in an upcoming round.
Technology
Meta partners with Midjourney on AI image and video models

Meta is partnering with Midjourney to license the startup’s AI image and video generation technology, Meta Chief AI Officer Alexandr Wang announced Friday in a post on Threads. Wang says Meta’s research teams will collaborate with Midjourney to bring its technology into future AI models and products.
“To ensure Meta is able to deliver the best possible products for people it will require taking an all-of-the-above approach,” Wang said. “This means world-class talent, ambitious compute roadmap, and working with the best players across the industry.”
The Midjourney partnership could help Meta develop products that compete with industry-leading AI image and video models, such as OpenAI’s Sora, Black Forest Labs’ Flux, and Google’s Veo. Last year, Meta rolled out its own AI image generation tool, Imagine, into several of its products, including Facebook, Instagram, and Messenger. Meta also has an AI video generation tool, Movie Gen, that allows users to create videos from prompts.
The licensing agreement with Midjourney marks Meta’s latest deal to get ahead in the AI race. Earlier this year, CEO Mark Zuckerberg went on a hiring spree for AI talent, offering some researchers compensation packages worth upwards of $100 million. The social media giant also invested $14 billion in Scale AI, and acquired the AI voice startup Play AI.
Meta has held talks with several other leading AI labs about other acquisitions, and Zuckerberg even spoke with Elon Musk about joining his $97 billion takeover bid for OpenAI (Meta ultimately did not join the offer, and OpenAI rejected Musk’s bid).
While the terms of Meta’s deal with Midjourney remain unknown, the startup’s CEO, David Holz, said in a post on X that his company remains independent with no investors; Midjourney is one of the few leading AI model developers that has never taken on outside funding. At one point, Meta talked with Midjourney about acquiring the startup, according to Upstarts Media.
Midjourney was founded in 2022 and quickly became a leader in the AI image generation space thanks to its realistic, distinctive style. By 2023, the startup was reportedly on pace to generate $200 million in revenue. It sells subscriptions starting at $10 per month, with pricier tiers that offer more image generations and cost as much as $120 per month. In June, the startup released its first AI video model, V1.
Meta’s partnership with Midjourney comes just two months after the startup was sued by Disney and Universal, which allege that it trained AI image models on copyrighted works. Several AI model developers — including Meta — face similar allegations from copyright holders, though recent court cases pertaining to AI training data have sided with tech companies.