Flux+Form report

The AI Adoption Gap for Ad Agencies

Why independent ad agencies get stuck on AI, and how they get unstuck.

A note before you read

This is not a pitch for our workshop. It is a working-through of what we have observed across hundreds of conversations with independent agency leaders, in-house creative teams, and the CMOs who hire them. The questions you are sitting with are the questions everyone is sitting with. The data confirms it. What follows is the structural picture of why agencies are stuck on AI, and what stuck looks like up close. If by the end you want to talk about it, we are at the back of this document.

01  /  The squeeze

The fee battle was already lost before AI showed up.

The 15% media commission has been gone for thirty years. What replaced it has been a slow but accelerating compression. Fixed-fee. Project-based. Value-based. Multipliers. Each new compensation model was a new way to ask agencies to do more for less. The 4A’s 2024 Compensation Methodologies study found that 72% of agencies now operate on fixed-fee terms, and the World Federation of Advertisers reports that 87% of marketers believe agencies are resistant to transparent fee structures. The trust deficit underneath the squeeze is documented.

Then AI arrived. And every CFO in every client organization found a new lever.

According to Gartner’s 2026 CMO survey, 65% of CMOs say AI will dramatically change the CMO role within the next two years. Most are not waiting around. Companion data from BCG shows that 71% of CMOs plan to invest more than $10 million annually in GenAI over the next three years, up from 57% the year prior. The investment is real and the pressure to deploy is real. But here is where the squeeze tightens.

65% of CMOs say AI will dramatically change the CMO role within two years
32% believe significant changes are needed to the CMO profile or skill set
15% of CEOs believe their marketing leaders are currently AI-savvy

By 2027, Gartner projects that lack of AI literacy will rank among the top three reasons CMOs are replaced at large enterprises.

Translation, in plain English: the people writing your check are betting their careers on AI. They do not understand it. They will not admit they do not understand it. And the pressure they are under to project confidence rolls downhill, fast, until it lands on you.

Some of the most incorrect assumptions about AI in marketing come from the people most certain they understand it. The pretending is the problem.

02  /  The DIY trap

You know what to do. You have known for a year. The work just does not get done.

Build the playbook. Train the team. Establish the standards. Get the agency moving in a coherent direction so when a client asks “what’s your AI capability,” you have an answer beyond “we use it.”

Then Monday happens. And Tuesday. The pitch deck. The deadline. The client call that ran long. The team member who left. The new business push. The old client who is “rethinking the relationship.”

The work of figuring out AI is the same size as the work of running the agency. So it does not get done. Or it gets done in the worst possible way: one person reads three articles, watches twelve YouTube videos, attends one webinar, and is now expected to teach the rest of the team. They are not a teacher. They never were. They have no curriculum. The transfer of knowledge fails. The team is mildly more confused than before.

The numbers back this up. Industry survey data from 2025 shows that 73.2% of agencies cite “staff upskilling bootcamp” as their top need for external support, while 51.8% report lack of dedicated time as the primary barrier to AI initiatives. Daily operations are crowding out the strategic AI work.

In-house creative teams have it worse. The In-House Agency Forum reports that adoption is climbing fast: generative AI use in in-house agencies jumped from 4% to 25% in a single year. But in-house teams operate inside enterprise IT and Legal environments that exist, by mandate, to reduce risk. AI is the largest risk vector either function has ever seen. Between inadvertent IP disclosure on the way out and inadvertent IP infringement on the way in, no one in the building wants AI to go away more than IT and Legal. WilmerHale’s 2025 AI Privacy Litigation Year in Review documents the rising litigation surface around enterprise AI use. The fear is not paranoia. It is documented.

The result, for in-house teams: they are asked to deliver what an external agency can deliver, with weaker AI tooling (the gap between what enterprise Copilot does and what frontier models do is not subtle), and no specialized training budget that matches the request.

The DIY trap is not a willpower problem. It is a structural one. The work is too big to fit alongside the work.

03  /  The siloing trap

Treating AI as a role-specific tool is wrong in a way that will not be obvious for twelve months.

Here is the second mistake almost every agency makes when they finally do try to address AI: they treat it as a role-specific tool. The Strategist does not need to know how to make video. The Creative does not need to know how to build automations. The Account team needs only the parts that touch client communication. Each function gets the slice that “applies to them.”

This is spectacularly wrong, and it is wrong in a way that will not be obvious until twelve months from now when the agency that ignored this advice is twelve months behind the agency that did not.

AI does not respect role boundaries. The Strategist who can generate twenty rough video boards in an afternoon writes better strategy because the strategy is now testable in motion before it leaves the office. The Creative who can build an automation that handles status updates and deck prep frees up the hours that were getting eaten by ops, and those hours go back into the work. The Account person who understands what synthetic testing actually does can manage client expectations during feedback rounds with language that matches what the production team is doing.

When the whole agency learns the same vocabulary at the same time, three things happen that do not happen when training is siloed.

A shared decision-making framework emerges. The team stops debating whether to use AI on a project and starts debating how and where, which is the right argument to be having. Conversations between Strategy and Creative and Production stop requiring translation, because everyone learned the same words for the same things at the same time.

The cohort effect kicks in. Watching a peer struggle with the same problem and figure it out is the most efficient learning structure that exists. People learn faster in a room with their colleagues than in a room of strangers, because the examples are real.

The least adaptive person on the team is no longer the ceiling. Which brings us to the third trap.

04  /  The fear trap

“What if I prompt myself out of a job?”

The thing nobody says out loud, even to themselves. It is the most honest fear in the building. It is also the one that quietly kills agency AI adoption, because fear is hard to speak about and easy to act on, and most of the action looks like avoidance. Skipping the training session. Not opening the tool. Telling yourself you’ll learn it on your own time. Which is, again, never.

The structural truth: your agency’s AI maturity tracks the least adaptive person on your team, not the most. The senior who refuses to engage drags down the rest of the senior team. The mid-level who is afraid to be seen experimenting in front of leadership stalls. The junior who watched a colleague get laid off after a “process improvement” knows what they saw, and they will not be the next experiment.

The unlock is empowerment, and empowerment is structural. Not motivational. Not a memo from leadership about how AI will not replace the team. The team has heard those memos. They are unconvinced.

What works is fail-fast. A learning environment where people see errors quickly, see the error inside a structured framework that lets them redirect, and watch a peer get it wrong and then get it right. The pace of recovery from an error is what determines whether a person engages with a tool or avoids it. Make recovery fast and the engagement compounds. Make recovery slow or punitive and the engagement collapses.

The team that learns together is also the team that fails together, in safety, before the client sees any of it.

05  /  The disclosure problem

Clients want the conversation. Agencies have not yet had it with themselves.

Even if you solve the squeeze, the DIY trap, the siloing, and the fear, you are still going to have a hard conversation with your client.

Edelman’s 2025 Trust Barometer is unusually clean on this point: 85% of clients expect disclosure when AI is used in the work they paid for. Nearly half describe disclosure as “extremely important.” This is no longer a soft preference. It is a baseline expectation among the people writing your invoice.

85% of clients expect disclosure when AI is used in the work they paid for
20% of consumers say companies are “very clear” about how AI and data are used
53% assume a brand is “doing nothing or hiding something” when it stays silent

The trust gap is documented. Clients want the conversation. Agencies do not know how to have it, because internally they have not yet had it with themselves. The conversation defaults to either over-promise (we use it everywhere, we are AI-first) or under-disclose (we don’t really use it, just for some small things). Both of those are losing strategies, because both of them assume the conversation is binary.

The conversation that wins is the third one. The one where you tell the client, with specificity, what AI is doing in your studio, what it is not doing, what your standards are for review, what your standards are for non-use, and how you make those decisions. That conversation requires that you actually have those answers internally, which most agencies do not, which is why the conversation does not happen, which is why clients keep filling in the silence with their worst assumption.

Both parties want to convince the other they are capable, and both find out neither is. At least not yet.

06  /  The way out

The answer to a structural problem has to be structural too.

There is no version of this where each agency invents its own answer in isolation. The squeeze is too tight. The IT and Legal context is too thorny. The team dynamics are too complicated. The disclosure conversation is too new.

The answer is shared. Not because togetherness is sentimental, but because the answer to a structural problem has to be structural too.

What “shared” actually has to mean, in practice:

Cross-functional, not role-siloed. The whole agency learns the same vocabulary at the same time. Strategy, Creative, Account, and Production build the shared language together, so when the work moves between them it does not require translation.

Daily practice, not eventual theoretical understanding. Short, structured assignments tied to the work the agency is already doing compound faster than long lectures or self-directed reading. The goal is to make AI feel familiar, fast, so the team stops thinking about the tool and starts thinking with it.

A fail-fast environment. People learn AI by getting it wrong and then watching themselves redirect quickly. Speed of recovery is what separates engagement from avoidance. The learning has to be safe, structured, and forgiving on the first attempt. Punitive learning environments produce the avoidance you are trying to solve.

Internal standards, written down. When the program ends, the agency needs an internal answer, agreed to in writing, for what AI is doing in the studio, what it is not doing, what gets reviewed, what gets disclosed, and how those decisions get made. Without that, every new client conversation starts from zero.

These are the requirements for an answer that actually works. They can be delivered through different programs, by different teachers, in different formats. We built one of them. If the picture in this report matches what you are seeing inside your own agency, the back of the report tells you how to talk to us about it.

About Flux+Form

Flux+Form provides hands-on AI training for independent ad agencies and in-house creative teams. The Creative Cadence Workshop is our flagship program: eight live sessions, weekly Hack Stack assignments tied to your agency’s actual work, and scheduled office hours, structured for cohorts of five to twenty participants. Larger agencies run multiple cohorts. Founded by Jeremy Swiller, holder of the ANA In-House Agency AI Training Instructor credential, with thirty years inside independent advertising agencies and a track record running creative departments at scale.

If any part of this report described your agency, we should talk.

Thirty minutes, no pitch, just a real conversation about whether the workshop is the right fit.

Sources