The Real Battle Behind Big Content vs AI


The headline suggests a dramatic face-off: traditional media titans (big content) standing up to generative AI and tech firms. But in reality, the fight is far more complex, and the conventional “David vs Goliath” narrative doesn’t hold up.

What’s Happening

  • Large entertainment and music corporations (e.g., Universal Music Group (UMG)) have initiated lawsuits against AI startups for using their catalogues to train models without explicit permission.
  • At the same time, these same corporations are making deals with AI companies (for example UMG partnering with an AI music platform) to monetise AI-driven tools.
  • Artists and creators are bearing much of the risk: loss of income, rights erosion, pressure to sign away voice/likeness data.
  • Behind the scenes, big tech (e.g., AI model-builders) and big content firms are finding common ground — licensing, partnerships, mutual benefit — rather than pure antagonism.

Why the “David vs Goliath” frame is misleading

  • The “Davids” (artists and creators) are in fact weaker in power relative to both big content and big tech. The true Goliaths may instead be the media conglomerates plus the AI firms, who between them control distribution, data, infrastructure and monetisation.
  • The fear: by using copyright litigation and licensing as leverage, big content may consolidate power rather than liberate artists. As the article puts it, “exclusive licensing deals between large media and tech companies while everybody else gets sort of left out in the cold.”
  • Also, copyright law itself is shown to be a blunt instrument when confronting AI’s disruption. It may protect some rights, but it doesn’t guarantee fair compensation or agency for creators.

Additional context (beyond the original article)

  • Generative AI in media and entertainment is not just about lawsuits: it’s about scale, personalisation, cost-efficiency, and new business models. According to the investment bank Morgan Stanley, media companies can use AI to produce high-quality content faster and cheaper, and to personalise experiences at scale.
  • The technology trend: multimodal AI (text, image, audio and video) is now able to synthesise content in multiple formats, making it possible to automate parts of filmmaking, music creation, advertising, gamification.
  • Ethical & governance concerns: At the World Economic Forum, the impact of generative AI on creative industries was flagged as needing a human-centric governance model — how to protect labour, rights, trust and quality.
  • Market dynamics: Studies of generative AI in two-sided content platforms (creators vs audience) show risks of oversupply (too much content), concentration of attention on “top creators”, and welfare impacts for the many smaller creators.

What’s at Stake

For Creators

  • Rights and income: When AI models are trained on large databases of creative works (music, images, films), the question is: will individual creators get compensated, credited, or even have a say?
  • Control of likeness/voice/data: Younger or less powerful artists are reportedly asked to sign contracts handing over rights to their voice, face, and digital clones for long periods.
  • Labour risk: Some surveys show illustrators and audiovisual creators anticipating significant revenue loss (one study projected ~21% revenue loss by 2028).

For Big Content (Media Conglomerates)

  • Opportunity: They own extensive libraries, distribution networks, recognizable brands. With AI, they can monetise those assets in new ways (licensing, dataset creation, spin-offs).
  • Risk: If creators revolt or unions push hard, the labour model could shift. Also, if tech firms bypass media companies, their business could be sidelined.
  • Strategic manoeuvre: By aligning with tech, media firms may reinforce their dominance rather than act purely as protectors of creators.

For Big Tech / AI Firms

  • Data hunger: Generative models need huge volumes of training data, including copyrighted works. Accessing them legally (or not) is a huge issue.
  • Infrastructure costs: Model training and deployment are expensive; large firms can absorb them, smaller ones struggle. The original article points this out: “Google and OpenAI can afford it; smaller open-source devs cannot.”
  • Regulatory risk & reputation: Tech firms must navigate copyright law, labour pushback, misuse of generative tools (deepfakes, impersonation), governance demands.

For Audiences / Public Interest

  • Content quality: With scale and cost cutting, there’s a risk of “AI slop” – low-effort, high-volume content that floods feeds.
  • Diversity & creativity: If the bulk of creative power is consolidated among large players, niche voices may get squeezed, reducing cultural diversity.
  • Authenticity & trust: Deepfakes, voice/cloning rights, manipulated media – all pose risks to trust in media. Governance and transparency become crucial.

A Closer Look: Licensing, Litigation & Labour

Licensing & deals

Media companies and tech firms are striking licensing agreements — media companies licensing their assets for AI training, tech companies gaining rights to use large libraries. This can appear beneficial: creators might earn royalties or share in revenues. But key questions remain:

  • Are the creators (musicians, writers, actors) directly compensated or is the deal done with the media company and the creators are left with minimal upside?
  • Are creators given the choice to opt out of having their work used in training datasets? Some reports say “no”.
  • Will smaller creators be shut out of these deals because they lack bargaining power?

Litigation

Copyright lawsuits have been brought: e.g., large studios suing AI image generators for using their libraries without permission.
But litigation alone is unlikely to produce broad, structural protections for creators:

  • Courts are still wrestling with how copyright law applies to generative AI training.
  • Even if some rulings favour creators, smaller creators may not benefit because the power for negotiation remains concentrated.
  • Legal remedies are often reactive rather than proactive; many creators must rely on companies, unions or regulatory change.

Labour & unionisation

One of the article’s key arguments: the strongest protection for creators may not be copyright alone — but organised labour (unions, collective bargaining).
When creative workers (writers, actors, illustrators) have a collective voice, they can negotiate rights, compensation, protections for AI use.
This means creators should watch not only legal/technical developments but also their labour rights, contracts, platforms and how AI is incorporated into workflows.

Why Big Content Wants to Frame Themselves as “Protectors of Artists”

It’s worth unpacking the motives here, because the narrative matters.

  • Public relations benefits: By aligning with creators in the anti-AI rhetoric, media companies gain legitimacy and goodwill among artists and fans.
  • Strategic positioning: If media companies become gatekeepers of training datasets (i.e., they own the rights to licence material to AI firms), they monetise both the past (archives) and the future (AI-powered new content).
  • Risk management: By being part of the AI deal-making, they reduce the risk of being sidelined if tech firms bypass them entirely.
  • Power consolidation: The article argues that what seems like a fight against big tech can become a consolidation of power among fewer large players (big content + tech) while smaller creators are sidelined.


What’s Being Missed or Under-Explored

Here are some gaps or additional dimensions that deserve more attention:

  1. Global dimension – Much of the commentary focuses on the U.S. (US courts, US media companies). But generative AI is a global phenomenon: emerging markets, non-English languages, and non-Hollywood creatives face distinct issues (regulation, labour conditions, local markets).
  2. Platform/algorithmic power – How platforms (YouTube, TikTok, Spotify) choose to recommend AI-generated vs human content, how algorithms treat new training-data-backed content, and how that affects creators’ visibility and income.
  3. New creator business models – Beyond being paid via legacy channels, how are creators exploring direct-to-fan, subscription, NFTs, micro-licensing and other models as a hedge against AI disruption?
  4. Technological differentiation & quality – Not all AI is equal. The difference between low-quality “slop” content and high-quality AI-assisted creative work, and how audiences respond to each.
  5. Regulatory & ethical ecosystem – Legislative efforts (e.g., digital replication rights, deep-fake laws) are moving—but often slowly. Transparent governance, auditing of models, data provenance, and consent are still under-emphasised.
  6. Impact on creative process – The article touches on labour displacement, but less on how the creative process itself is changing: workflows, hybrid human-AI teams, new skillsets (prompt-engineering, AI-editing).
  7. Economic models of AI training – Who pays for the training data? How are royalties or licensing fees structured? How is value distributed when a derived work is produced?
  8. Public interest & diversity – The focus is mostly on creators vs companies; less on how AI affects cultural diversity, minority voices, local languages, and less-resourced creative sectors.

Where Things Might Go (Scenarios)

Here are three possible future scenarios (not predictions, just possibilities):

A. Creator-Centric Model Strengthens
Creators organise (via unions, associations) and negotiate clear terms: opt-out rights for training, fair compensation, transparency of dataset use. Media companies and tech firms must comply or risk reputational/legal cost.

B. Consolidation & Gatekeeping
Media conglomerates and tech firms dominate the AI value chain: they licence, train, distribute, monetise. Creators become contractors with little agency; smaller players are excluded. Audiences get more content, but with less diversity and more of it corporate-led.

C. Hybrid Ecosystem
A mixed model: some large players dominate, but new platforms and models emerge that empower independent creators using AI tools themselves (democratised creation). Regulation and platform design help distribute power more evenly, while creators adapt with new business models.

What You Should Watch

  • Major copyright lawsuits and their outcomes (e.g., infringement claims by studios or labels).
  • Licensing deals between media companies and AI firms — which creators are included, what rights are signed over.
  • Labour union negotiations around AI usage in creative industries (writers, actors, musicians).
  • Platform policies on AI-generated content (monetisation, visibility, attribution).
  • Regulatory developments (digital rights, deep-fake laws, data-use transparency).
  • How audience preferences evolve: do they prefer purely human-made content, hybrid human+AI content, or AI-only content?
  • New creator tools: Are there AI-tool platforms that give creators more agency rather than fewer?
  • Global inequities: Impact on creators in non-US markets, languages, local economies.

Common Questions About the Topic

Here are the most frequently asked questions (and simpler answers) about this evolving landscape:

Q1. Are big media companies really fighting AI or working with it?
A: Both. They are fighting in court and in public rhetoric (copyright suits, lobbying) and they are partnering with AI firms behind the scenes (licensing deals, co-development). The “fight” is not binary.

Q2. If a company licences my work for AI training, do I automatically get paid?
A: Not always. Smaller creators may be left out of the deal or offered minimal compensation. The terms vary widely. Opt-out rights may be limited or absent.

Q3. Will AI replace human creators?
A: Unlikely in the near term for quality, original, human-driven content. But AI will replace or augment many tasks: background visuals, routine writing, basic scripting, templated music. The value will shift more toward unique human creativity, emotion, voice.

Q4. What rights should creators demand?
A: At minimum: consent for use in training datasets; fair compensation or revenue share; attribution/credit; opt-out ability; transparency about where and how their work is used; protections for likeness/voice cloning.

Q5. Does copyright law protect creators from AI misuse?
A: Partially—but copyright law was not built for generative AI. It may offer some remedy (infringement claims) but doesn’t guarantee fair business terms, and it may favour large rights-holders over individual creators.

Q6. How does this affect audiences?
A: Audiences could gain more content, more personalisation, lower cost. But there are risks: fewer unique voices, more homogenised output, fatigue from low-quality “quantity over quality” content, deepfake or manipulated media risks.

Q7. What can independent/smaller creators do?
A: They can:

  • stay informed about contract terms and training-data rights
  • explore direct-to-fan models (subscriptions, crowdfunding)
  • leverage AI tools themselves to enhance their craft (not just be at the mercy of big platforms)
  • join or form unions/associations to amplify collective power
  • push platforms for visibility and fair algorithmic treatment.

Q8. Will regulation solve everything?
A: No single regulation will change things overnight. Good regulation helps (data rights, transparency, deep-fake laws) but must be combined with platform policy, labour organisation, creator business model innovation and ethical design of AI.

Q9. Are we heading toward more creative freedom or more corporate control?
A: It’s not predetermined. The outcome depends on how power is distributed: if media and tech firms dominate, corporate control could increase; if creators organise and new tool platforms emerge, creative freedom could expand. The choices being made now matter.

Q10. What should audiences care about?
A: Audiences should care about: supporting creators who are fairly compensated; being aware of AI-generated vs human content; questioning how algorithms and platforms shape what we see; demanding transparency and authenticity in media.


Final Thoughts

The clash between “big content” and “AI” isn’t simply a story of media giants battling tech giants. It’s far messier—and the real underdog may be the individual creator.
While big content may proclaim its allegiance to artists, the partnerships they forge with AI firms and the structures they build may end up consolidating power rather than distributing it.
For creators, the moment is crucial: understanding rights, forging new business models, organising labour, and adapting to hybrid human+AI workflows may determine who wins in this new era.
For audiences and society, the stakes include: diversity of voice, authenticity of content, trust in media, and how culture evolves when creation scales with machines.

If we just cheer on media companies as champions of creators — without scrutiny — we may miss that the real power is shifting somewhere else entirely. Creators, audiences, platforms and regulators will all shape what comes next.

Source: The Guardian
