Rosanna Pansino has been sharing her baking creations with the internet for over 15 years, hoping to delight and inspire with fun creations that include a Star Wars Death Star cake and holographic chocolate bars. But in her latest series, she has a new goal: "Kick AI's butt."
Blame it on the AI slop overwhelming her social media feeds. Pansino used to see posts from real bakers and friends; now, they're being crowded out by AI-generated clips. There's an entire genre of slop videos featuring food, including a bizarre trend of unlikely objects being spread "satisfyingly" on toast.
She decided to do something about it. She would put her years of skill side by side with AI to recreate these slop videos in real life.
Case in point: a pile of sour gummy Peach Rings, effortlessly smeared on toast. The AI video looked simple enough, but Pansino needed to create something entirely new. She used butter as her base, infused with peach-flavored oil. Yellow and orange food coloring gave it the right pastel hues. She carefully piped the butter into rings using a silicone mold. Once they hardened in the freezer, she used uncolored butter to attach two rings together into the right 3D shape. The final touch was to dunk them in a mixture of sugar and citric acid for that sour candy look and taste.
It worked. The butter rings were perfect replicas of real candy rings, and Pansino's video paralleled the AI version exactly, with the rings gliding smoothly across the toast. Most importantly, she had done what she set out to do.
"The internet is flooded with AI slop, and I wanted to find a way to fight back against it in a fun way," Pansino tells me.
It's a rare victory for humans as AI-generated slop inundates an online world that had, once upon a time, been built by people for people.
AI technology has been working behind the scenes on the internet for years, often in unnoticeable ways. Then, a few years ago, generative AI burst onto the scene, launching a transformation that has unfolded at breakneck speed. With it came a flood of AI slop, a term for the particularly lackluster AI-generated text, images and videos that are inescapable online, from search engines to publishing and social media.
"AI slop" is a shabby imitation of content, often a pointless, careless regurgitation of existing information. It's error-prone, with summaries proudly proclaiming made-up facts and papers citing fake credentials. Images tend to have a slick, plastic veneer, while brainrot videos struggle to obey basic laws of physics. Think fake bunnies on trampolines and AI Overviews advising you to put glue on pizza.
The vast majority of US adults who use social media (94%) believe they see AI-generated content when scrolling, a new CNET study found. Only 11% found it entertaining, helpful or informative.
Slop happens because AI makes it quicker, easier and cheaper than ever to create content at an incredible scale. OpenAI's Sora, Google's Nano Banana and Meta AI create videos, images and text with a few clicks of a button.
Experts have loudly voiced concerns about AI's impact on the environment, the economy, the workforce, misinformation, children and other vulnerable people. They've cited its ability to further bias, supercharge scams and harm human creativity, but nothing has slowed down the rapid adoption and scaling of AI. It's overtaking the human creators, artists and writers whose work fuels the very existence of these models.
AI slop is an oil spill in our digital oceans, but there are plenty of people working to clean it up. Many are fighting for better ways to identify and label AI content, from memes to deepfakes. Creators are pushing for better media literacy and changing how we consume media. Publishers, scientists and researchers are testing new ways to keep bad information from gaining traction and credibility. Developers are building havens from slop with AI-free online spaces. Legislation and regulation, or the lack of it, play a role in every potential solution.
We can't ever be completely rid of AI, but all these efforts are bringing some humanity back to the internet. Pansino's recreations of AI videos highlight the painstakingly detailed hard work that goes into creation, far more than typing a prompt and clicking generate.
"Human creativity is one of the most important things we have in the world," says Pansino. "And if AI drowns that out, what do we have left?"
Creators who push back: 'AI could never'
The internet was built on videos like Charlie Bit My Finger, Grumpy Cat and the Evolution of Dance. Now, we have videos of AI-generated cats forming a feline tower and "Feel the AGI" memes. These innocuous AI posts are why some people on social media see slop as entertainment or a new kind of internet culture. Even when videos are very clearly AI, people don't always mind if they're perceived as harmless fun. But slop isn't benign.
You see slop because it's being forced upon you — not because you've indicated to the algorithms that you like it. If you were to sign up for a new YouTube account today, a third of the first 500 YouTube Shorts shown to you would be some kind of AI slop content, according to a report from Kapwing, a maker of online video tools. There are over 1.3 billion videos labeled as AI-generated on TikTok as of February. Slop is baked into our scrolling the same way microplastics are a default ingredient in our food.
Pansino compares her experience recreating AI food slop videos to an episode of The Office. In it, Dwight is competing with the company's new website to see if he can make more sales.
"Dwight, single-handedly, is outselling the website — he's competing against the machine," Pansino says. "That's what I feel like when I'm baking against AI. It's a good rush."
(The Office fans may recall that Dwight wins at the end of the episode, and later, due to massive errors and fraud, the site's creator, Ryan, is fired.)
Her 21 million-plus followers across YouTube, Instagram and TikTok have cheered on her AI recreation series, which Pansino attributes to their own frustrations with seeing slop on their feeds. Plus, her creations are actually edible.
"We're getting dimensions that AI could never," she says.
Other creators have emerged as "reality checkers." Jeremy Carrasco (@showtoolsai) uses his background as a technical video producer to debunk viral AI videos. His team would livestream events for businesses, working to avoid errors, which has helped him more easily spot when AI erroneously mimics video qualities such as lens flares. His educational videos help his more than 870,000 Instagram, YouTube and TikTok followers recognize these abnormalities.
Analyzing a video's context, Carrasco points out telltale signs of generative AI such as weird jump cuts and continuity issues. He also tracks down the first time a video was shared by a real person or a slop account. Anyone can do this, but it's hard when you're being "emotionally baited" by slop, Carrasco says.
"Most people aren't spending their time analyzing videos like I am. So if it hits their subconscious [signaling], 'This looks real,' their brain might shut off there," Carrasco says.
Slop producers don't want you to second-guess what you're seeing. They want you to get emotional — whether that's delighted by bunnies on a trampoline or outraged by political memes — and to argue in the comments and share the videos with your friends. The goal for many producers of AI slop is engagement and, therefore, monetization. The Kapwing report estimates the top slop accounts are pulling in millions of dollars of ad revenue per year. They're just like the original engagement farmers and ragebaiters on Twitter. What's old is now AI-powered.
Seeing is not believing. What now?
It can be difficult for the online platforms we rely on to identify AI images and videos. To weed out the worst offenders, the accounts that mass-produce sloppy spam, some platforms encourage their real users to add verifications to their accounts. LinkedIn has had some success here, with over 100 million of its members adding these new verifications. But AI makes it hard to keep up.
People are using AI-powered group automation tools to make AI-generated posts and leave comments across hundreds of random accounts in a fraction of the time it would take to do so manually. Groups of these users are called engagement pods, Oscar Rodriguez, vice president of trust products at LinkedIn, tells me. The company has removed "hundreds of LinkedIn groups" displaying these engagement-farming behaviors in just the past few months, but identifying them is tricky.
"There is no one signal that I can tell you definitively makes [an account] inauthentic or fake, but it's a combination of different signals, the behavior of the accounts," says Rodriguez.
Take AI-generated images, for example. Many people use AI to create new headshots to avoid paying for pricey photoshoots, and it isn't against LinkedIn's rules to use them as profile photos. So an AI headshot alone isn't enough to warrant suspicion. But if an account has an AI profile photo and shows other warning signs — like commenting more frequently than LinkedIn internally knows is typical for human users — that raises red flags, Rodriguez says.
To spot AI content, platforms rely on labeling and watermarking. Labeling requires people to disclose that their work was made with AI. If you don't, monitoring systems can try to flag it themselves. One of the strongest signals these systems rely on is watermarks, which are invisible signatures applied during content creation and hidden in a piece of content's metadata. They give you more information about how and when something was created.
Most watermarking systems focus on two areas: hardware companies authenticating real content as it's captured, and AI companies embedding signals into their synthetic, AI-generated media when it's created. The Coalition for Content Provenance and Authenticity is a major advocacy group trying to standardize how synthetic media is watermarked with content credentials.
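For the technically curious, here is a minimal, hypothetical Python sketch of what "hidden in the metadata" looks like in practice: it simply scans an image file's raw bytes for the markers a C2PA (JUMBF) manifest typically leaves behind. The file name and marker list are illustrative assumptions; this is a crude presence check, not real verification, which requires validating cryptographic signatures with dedicated C2PA tooling, and stripping the metadata defeats it entirely.

```python
# Crude heuristic sketch (not a real verifier): does this image appear to
# carry an embedded C2PA/JUMBF manifest, i.e. content credentials?
from pathlib import Path

# Byte markers that an embedded JUMBF/C2PA manifest typically contains.
MARKERS = (b"jumb", b"c2pa")

def seems_to_carry_content_credentials(path: str) -> bool:
    """Presence check only; real validation verifies cryptographic signatures."""
    data = Path(path).read_bytes()
    return any(marker in data for marker in MARKERS)

if __name__ == "__main__":
    # "photo.jpg" is a hypothetical file name used for illustration.
    print(seems_to_carry_content_credentials("photo.jpg"))
```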
Many, but not all, AI models are compatible with the C2PA's framework. That means its verification tool can't flag every piece of AI-generated media, which creates inconsistency and confusion. Half of US social media users (51%) want better labeling, CNET found. That's why other solutions are in the works to fill the gaps.
Abe Davis, a computer science professor at Cornell University, led a team that developed a way to embed watermarks in light. All that's needed is to turn on a lamp that uses the required chip to run the code. The process is called noise-coded illumination. Any camera that captures video footage of an event where the light is shining will automatically add the watermark.
"Instead of applying the watermark to data that's captured by a specific camera, [noise-coded illumination] applies it to the light environment. Any camera that's recording that light is going to record the watermark," Davis says.
The watermark is hidden in the light's frequencies, spread across a video, undetectable to the human eye and difficult to remove. Those with the secret code can decode the watermark and see which parts of a video or image have been manipulated, down to the pixel level. This can be especially useful for live events, like political rallies and press conferences, where the speakers are targets for deepfakes.
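As a rough illustration of the principle, and emphatically not the Cornell team's actual algorithm, the hypothetical Python sketch below simulates a lamp whose brightness is nudged up or down each frame by a tiny pseudorandom code. Anyone who holds the code can check a recording for that correlation; anyone else just sees ordinary lighting.

```python
# Toy simulation of light-based watermarking: a secret +/- flicker pattern is
# added to a lamp's brightness, then recovered from a recording by correlation.
import numpy as np

rng = np.random.default_rng(seed=42)           # the shared secret is this seed
n_frames = 900                                 # e.g. 30 seconds at 30 fps
code = rng.choice([-1.0, 1.0], size=n_frames)  # hidden per-frame flicker pattern

# Simulated per-frame average brightness of a scene lit by the coded lamp.
scene = 120 + 5 * np.sin(np.linspace(0, 8, n_frames))               # ordinary lighting drift
amplitude = 1.0                                                      # flicker far too small to notice
recorded = scene + amplitude * code + rng.normal(0, 1.0, n_frames)   # plus camera noise

def watermark_score(frames: np.ndarray) -> float:
    """Correlate a brightness trace against the secret code."""
    return float(np.dot(frames - frames.mean(), code) / len(frames))

print(f"with watermark:    {watermark_score(recorded):.2f}")  # close to `amplitude`
print(f"without watermark: {watermark_score(scene):.2f}")     # close to zero
```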
Although it is not but commercially obtainable, the analysis reveals the completely different alternatives so as to add an additional layer of safety from AI. Watermarking is a sort of collective motion drawback, Davis says. Everybody would profit if we carried out all these approaches, however nobody particular person advantages sufficient. That is why we’ve haphazard efforts unfold throughout a number of industries which can be extremely aggressive and quickly altering.
Labeling and watermarking are necessary instruments within the struggle towards slop, however they will not be sufficient on their very own. Merely having AI labeled would not cease it from filling our lives. However it’s a vital first step.
Publishing pains
If you think it's easier to single out AI-generated text than images or videos, think again. Publishing is one of the biggest targets of AI slop after social media. Chatbots and Google's AI Overviews eat up articles from news sources and other digital publications and spit out wonky and potentially copyright-infringing results. AI-powered translation and record-keeping tools threaten the work of translators and historians, but the tech's superficial understanding of cultures and nuances makes it a poor substitute.
Slop is especially pervasive in academic publishing. In a "publish or perish" culture like academia, some of it may be created unintentionally or by mistake, especially by first-time researchers and writers. But it's slipping into mainstream journals, like a now-retracted study that went viral for including an obviously incorrect, overly phallic AI-generated image of a rat's reproductive system riddled with typos. That's one example, albeit a hilarious and easily recognizable one, of how AI is turbocharging bad research, particularly for companies that sell fake research to academic publishers, known as paper mills.
The respected and widely used prepublication database arXiv is one of the biggest targets for AI slop. Editorial director Ramin Zabih and scientific director Steinn Sigurdsson tell me that submissions typically increase about 20% each year; now, the growth is getting "worrisomely faster," Zabih says. AI is to blame, they say.
ArXiv gets around 2,000 submissions a day, half of which are revisions. It has automated screening tools to weed out the most obviously fraudulent or AI-generated studies, but it relies heavily on hundreds of volunteers who review the remaining papers according to their areas of expertise. It has also had to tighten its submission guidelines, adopting an endorsement system to ensure only real people can share research. It isn't a perfect fix, Sigurdsson acknowledges, but it's necessary to "stem the flood" of scientific slop.
"The corpus of science is getting diluted. A lot of the AI stuff is either actively wrong or it's meaningless. It's just noise," says Sigurdsson. "It makes it harder to find what's really happening, and it can misdirect people."
There's been so much slop that one research team used these fraudulent papers to build a machine learning tool that can recognize it. Adrian Barnett, a statistician and researcher at Queensland University of Technology, was part of the team that used retracted journal papers to train a language model to spot fake and potentially AI-generated studies, specifically in cancer research, unfortunately a prime target area.
Paper mill-created articles "have the illusion of a paper," Barnett says. "They know what a paper should look like, and then they spin the wheel. They'll change the disease, they'll change a protein, they'll change a gene and presto, you've got a new paper."
The tool acts as a kind of scientific spam filter. It identifies patterns, like commonly used phrases, in the templates that chatbots and human fabricators rely on to mimic academia's style. It's one example of how AI technology itself is being used to fight slop — AI versus AI, in many cases. But like other AI verification tools, it's limited; it can only identify the templates it was trained on. That's why human oversight is especially important.
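Barnett's tool is far more sophisticated, but the spam-filter idea maps onto a simple, hypothetical sketch: train a text classifier on known paper-mill abstracts versus legitimate ones, and the templated phrases become the features it keys on. The example abstracts below are invented stand-ins, not real training data.

```python
# Minimal sketch of a "scientific spam filter": learn which phrases separate
# paper-mill-style abstracts from legitimate ones. Illustrative data only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

abstracts = [
    "MicroRNA-21 promotes proliferation and invasion in lung cancer cells",     # templated
    "LncRNA XIST promotes proliferation and invasion in gastric cancer cells",  # templated
    "We measure the clustering of quasars at redshift two using new survey data",
    "A randomized trial of early mobilization after hip replacement surgery",
]
labels = [1, 1, 0, 0]  # 1 = suspected paper-mill template, 0 = legitimate

# Word and two-word (bigram) features, so boilerplate such as
# "promotes proliferation and invasion" can carry weight.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=1),
    LogisticRegression(max_iter=1000),
)
model.fit(abstracts, labels)

new_abstract = ["CircRNA-0001 promotes proliferation and invasion in breast cancer cells"]
print(model.predict_proba(new_abstract))  # probability it matches the template
```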
Humans have gut instincts and subject-matter expertise that AI doesn't. For example, arXiv's moderators flagged a fake series of submissions because the authors' names struck them as too stereotypically British, like characters from Jane Eyre. But the demand for human review creates the risk of a "death spiral," Zabih said, where reviewers' workloads get larger and more unpleasant, causing them to stop reviewing and adding pressure to the remaining reviewers.
"There's a bit of an arms race between writing [AI] content and tools for automatically identifying it," Zabih says. "But at this point in time, I hate to say this, it's a battle we're losing slowly."
Can there be a safe haven from slop?
Part of the problem with slop — if not the entire problem — is that the handful of companies that run our online lives are also the ones building AI. Meta slammed its AI into Instagram and Facebook. Google integrated Gemini into every segment of its massive business, from search to smartphones. X is practically inseparable from Grok. It's very difficult, and in some cases impossible, to turn off AI on certain devices and sites. Tech giants say they're adding AI to improve our experience. But that means they have a pretty big conflict of interest when it comes to reining in slop.
They're desperate to prove their AI models are needed and work well. We're the guinea pigs used to inflate the usage stats for their quarterly investor meetings. While some companies have launched tools to help deal with slop, it isn't nearly enough. They aren't overly interested in helping solve the problem they created.
"You cannot separate the platforms from the people making the AI," Carrasco says. "Do I trust [tech companies] to have the right compass about AI? No, not at all."
Meta and TikTok declined to comment on the record about efforts to rein in AI-generated content. YouTube spokesperson Boot Bullwinkle said, "AI is a tool for creativity, but it's not a shortcut for quality," and that to prioritize quality experiences, the company is "less likely to recommend low-quality or repetitive content."
Other companies are swerving in the opposite direction. DiVine is one of a few AI-free social media apps, a reimagining of Vine, the short-lived short-video service that predated TikTok. Created by Evan Henshaw-Plath, with funding from Twitter creator Jack Dorsey, the new video app will include an archive of over 10,000 Vines from the original app — no need to hunt down those Vine compilations on YouTube. It's an appealing mix of nostalgia for a less-complicated internet and an alternative reality where slop hasn't taken over.
"We're not anti-AI," DiVine chief marketing officer Alice Chan says. "We just think that people deserve a place they can come where there's a high level of trust that the content they're seeing is real and made by real people."
To keep AI videos off the platform, the company is working with The Guardian Project to use its identification system called proof mode, built on top of the C2PA framework, which verifies human-created content. It also plans to work with AI labs to "design checks … that look at the underlying structure of these videos," Henshaw-Plath said in a podcast earlier this year. DiVine users will also be able to report AI videos if they see them, though the app won't allow video uploads when it launches, which should help prevent slop from slipping through.
Authenticity matters now more than ever, and social media executives know it. On New Year's Eve, Instagram chief Adam Mosseri wrote a lengthy post about needing to return to a "raw" and "imperfect" aesthetic, criticizing AI slop and defending AI use in the same paragraph. YouTube CEO Neal Mohan started 2026 with a letter explicitly stating that slop is an issue and that platforms should be "reducing the spread of low-quality, repetitive content."
But it's hard to imagine platforms like Instagram and YouTube will be able to return to a truly people-centric, authentic and real culture as long as they rely on algorithmic curation of recommended content, push AI features and allow people to share entirely AI-generated posts. Apps like Vine, which never demanded perfection or developed AI, might have a fighting chance.
Slopaganda and the messy web of AI in politics
AI is a power player in politics, responsible for creating a potent new aesthetic and influencing opinions, culminating in what's called slopaganda — AI content specifically shared to manipulate beliefs to achieve political ends, as one early study puts it.
AI is already an effective tool for influencing our beliefs, according to a recent Stanford University study. Researchers wanted to know whether people could identify political messages written by AI and measure how effective those messages are at influencing beliefs. When reading an AI-created message, the vast majority of respondents (94%) couldn't tell. These AI-generated political messages were also as persuasive as those written by humans.
"It's pretty difficult to craft these persuasive messages in a way that resonates with people," says Jan Voelkel, one of the study's authors. "We thought this was quite a high bar for large language models to achieve, and we were surprised by the fact that they were already doing so well."
It isn't necessarily a bad thing that AI can craft influential political messages when it's done responsibly. But AI can also be used by bad actors to spread misinformation, Voelkel says. The risk is that one-person misinformation operations can use AI to sway people's opinions while working more efficiently than before.
One way we see the influence and normalization of slop in politics is through imagery. AI memes are a new kind of political commentary, as demonstrated by President Donald Trump and his administration: the White House's AI image of a woman crying while being deported; Trump's AI cartoon video of himself wearing a crown and flying a fighter jet after nationwide "No Kings" protests; Defense Secretary Pete Hegseth's parody book cover of Franklin the Turtle holding a machine gun shooting at foreign boats; an AI-edited image that altered a woman's face to look as if she was crying after being arrested for protesting Immigration and Customs Enforcement.
Governments have the power to decide whether and how to regulate AI. But legislative efforts have been haphazard and scattered. Individual states have taken action, as in the case of California's AI Transparency Act, Illinois' limits on AI therapy, Colorado's algorithmic discrimination rules and more. But these laws are caught in a battle between the states and the federal government.
Trump has said patchwork state regulation will prevent the US from "winning" the global AI race by slowing down innovation, which is why the Department of Justice formed a task force to crack down on state AI laws. The administration's AI Action Plan, meanwhile, calls for slashing regulations for AI data centers and proposes a new framework to ensure AI models are "free from top-down ideological bias," though it's unclear how that would play out.
Tech leaders like Apple's Tim Cook, Amazon's Jeff Bezos, OpenAI's Sam Altman, Meta's Mark Zuckerberg, Microsoft's Bill Gates and Alphabet's Sundar Pichai have met with Trump multiple times since he took office. With an increasingly cozy relationship with the White House, Google and OpenAI have welcomed the push to cut legal red tape around AI development.
While governments dither on regulation, tech companies have free rein to proceed as they please, lightly constrained by a handful of AI-specific laws. Comprehensive, enforceable legislation could control the fire hose of harmful slop, but as of now, the people responsible for it are either unable or unwilling to act. This has never been clearer than with the rise of AI deepfakes and AI-powered image-based abuse.
Deepfakes: Fake content, real harm
Deepfakes are the most insidious form of AI slop. They're images and videos so realistic we can't tell whether they're real or AI-generated.
We had deepfakes before we had AI. But pre-AI deepfakes were expensive to create, required specialized skills and weren't always believable. AI changes that, with newer models creating content that's indistinguishable from reality. AI democratized deepfakes, and we're all worse off for it.
AI's ability to produce abusive or illegal content has long been a concern. It's why nearly all AI companies have policies outlawing those uses. But we've already seen that the systems meant to prevent abuse aren't perfect.
Take OpenAI's Sora app, for example. The app exploded in popularity last fall, letting you make videos featuring your own face and voice and the likenesses of others. Celebrities and public figures quickly asked OpenAI to stop harmful depictions of them. Bryan Cranston, the actors' union SAG-AFTRA and the estate of Martin Luther King Jr. all brought their concerns to the company, which promised to build stronger safeguards.
(Disclosure: Ziff Davis, CNET's parent company, in 2025 filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
Sora requires your consent before letting other people use your likeness. Grok, the AI tool made by Elon Musk's xAI, does not. That's how people were able to use Grok to make AI-generated nonconsensual intimate imagery.
From late December into early January, a rush of X users asked Grok to create images that undress or nudify people in photos shared by others, primarily women. Over a nine-day period, Grok created 4.4 million images, of which 1.8 million were sexual, according to a New York Times report. The Center for Countering Digital Hate ran a similar study, which estimated that Grok made roughly 3 million sexualized images over 11 days, with 23,000 of those deepfake porn images including children.
That's millions of incidents of harassment enabled and efficiently automated by AI. The dehumanizing trend highlighted how easy it is for AI to be weaponized for harassment.
"The perpetrator can be literally anyone, and the victim can be literally anyone. If you have a photo online, you could be a victim of this now," says Dani Pinter, chief legal officer at the National Center on Sexual Exploitation.
X didn't respond to multiple requests for comment.
Deepfakes and nonconsensual intimate imagery are illegal under the 2025 Take It Down Act, but the law also gave platforms a grace period (until May) to set up processes for taking down illicit images. The enforcement mechanisms in the law only allow the DOJ and the Federal Trade Commission to investigate the companies, Pinter says, not individuals to sue perpetrators or tech companies. Neither agency has opened an investigation yet.
Deepfakes hit on a core issue with AI slop: our lack of control. We know AI can be used for malicious purposes, but we don't have many individual levers to pull to fight back. Even looking at the big picture, there's so much turmoil around AI legislation that we're largely forced to rely on the people building AI to ensure it's safe. The current guardrails might work sometimes, but clearly not all the time.
Grok's AI image-based sexual abuse was "so foreseeable and so preventable," Pinter says.
"If you designed a car, and you didn't even check whether certain equipment would explode, you would be sued into oblivion," Pinter says. "That is a basic bottom line: Reasonable conduct by a corporate entity … It's like [xAI] didn't even do that basic thing."
The story of AI slop, including deepfakes, is one of AI enabling the very worst of the internet: scams, spam and abuse. If there's a positive side, it's that we're not yet at the end of the story. Many groups, advocates and researchers are committed to fighting AI-powered abuse, whether that's through new laws, new rules or better technology.
Fighting an uphill battle
Nearly every tech executive who's building AI rationalizes that AI is just the latest tool that can make your life easier. There's some truth to that; AI will probably lead to welcome progress in medicine and manufacturing, for example. But we've seen that it's a frighteningly efficient instrument for fraud, misinformation and abuse. So where does that leave us, as slop gushes into our lives with no relief valve in sight?
We're never getting the pre-AI internet back. The fight against AI slop is a fight to keep the internet human, one we need now more than ever. The internet is inextricably intertwined with our humanity, and we're inundated with so much fake content that we're starving for anything real. Trading instant gratification and the sycophancy of AI for online experiences that are rooted in reality, maybe with a little more friction but also a lot more authenticity — that's how we get back to using the internet in ways that give to us rather than drain us.
If we don't, we may be headed for a truly dead internet, where AI agents interact with one another to produce the illusion of activity and connection.
Substituting AI for humanity won't work. We've already learned this lesson with social media. The ocean of AI slop that social media has become is driving us further from the tech's original purpose: connecting people.
"AI slop is actively trying to destroy that. It's actively trying to replace that part of your feed because your attention is limited, and it's actively taking away the connections that you had," Carrasco says. "I hope that AI video and AI slop make people wake up to how far we've drifted."
Art Director | Jeffrey Hazelwood
Creative Director | Viva Tung
Video Presenter | Katelyn Chedraoui
Video Editor | JD Christison
Project Manager | Danielle Ramirez
Editors | Corinne Reichert and Jon Reed
Director of Content | Jonathan Skillings
