When New Technology Strives to Be Human


The idea that technology should imitate humans is everywhere. From Alan Turing’s famous “imitation game” to the latest chatbots that aim to sound just like you and me, the goal of artificial intelligence (AI) has often been to mirror human behaviour. But according to a recent Financial Times interview with the economist Erik Brynjolfsson, this human-imitation ambition might be holding us back rather than propelling us forward.

Here’s a deeper dive into the argument: what the article covered, some further layers it didn’t fully explore, and key questions answered at the end.


🤖 The Core Argument: Why Imitation Might Be the Wrong Benchmark

Brynjolfsson introduces what he calls the “Turing Trap” — a scenario where researchers, companies and society focus on building machines that imitate human behaviour rather than machines that augment or expand what humans can do.

Key features of this problem include:

  • Imitative systems aim to substitute humans (e.g., chatbots replacing customer service) rather than collaborate with humans.
  • When machines simply copy human labour, the economic benefits tend to concentrate among owners of capital and technology rather than diffusing broadly.
  • Because humans serve as the benchmark (“can a machine do what a human can do?”), we may set the ceiling on innovation too low and miss the chance to build machines that do what humans cannot.

Thus, the broad claim: Technology should move beyond human imitation. It should focus on capability expansion — enabling new forms of work, creativity, decision-making, and value that humans alone cannot deliver.

📋 What the FT Article Covered

  • The origin of the human-imitation goal in AI, tracing back to the Turing Test.
  • The critique that many current AI systems pursue tasks that humans already do — so it becomes automation rather than augmentation.
  • How institutional, business and policy incentives (labour substitution, cost-cutting) favour mimicry over collaboration.
  • Brynjolfsson’s policy suggestions: redefine firm-level metrics to reward human-machine synergy, promote competition and diffusion of technology, build public infrastructure (data, audit regimes) to support augmentation rather than pure substitution.

🔍 What the Story Didn’t Fully Explore — Additional Dimensions

Here are some deeper layers worth adding to flesh the topic out:

1. The historic role of imitation in human learning and culture

Imitation is not bad in itself. Humans learn by copying: from early childhood we mimic gestures, speech and behaviour, and this “over-imitation” (research in psychology shows children copy even pointless actions) is how complex skills and culture get transmitted across generations. But when machines only mimic humans rather than extend them, we may get stuck.

2. Technical challenges and implications of moving beyond imitation

Imitation learning (robots copying humans or large language models mimicking text) is technically tractable and appealing. But building machine systems that go beyond human capabilities—or complement them in non-human ways—poses higher demands: novel architectures, new benchmarks, unpredictable behaviour, more risk. See research on “Thought Cloning” or “Co-Imitation” in robotics.
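The ceiling that imitation imposes can be made concrete with a toy sketch. The snippet below is a minimal, hypothetical example of behavioural cloning (the simplest form of imitation learning): a model is fitted by least squares to an “expert’s” state-action demonstrations, so by construction the best it can do is match the expert, never exceed it. The linear expert and the data here are invented for illustration.

```python
import numpy as np

# Toy behavioural cloning: regress expert actions onto observed states.
# The "expert" is a hypothetical linear policy; data are synthetic.
rng = np.random.default_rng(0)

states = rng.normal(size=(200, 4))             # 200 demonstrations, 4 state features
expert_weights = np.array([0.5, -1.0, 2.0, 0.3])
actions = states @ expert_weights              # the expert's (noise-free) actions

# Fit the imitator by least squares. With clean data it recovers the
# expert exactly -- and that recovery is also its ceiling: the clone
# has no mechanism for doing anything the expert did not demonstrate.
learned, *_ = np.linalg.lstsq(states, actions, rcond=None)

print(np.allclose(learned, expert_weights))
```

The point of the sketch is the asymmetry: the objective rewards matching demonstrations, so augmentation (capabilities the demonstrator lacks) simply is not in the loss function.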

3. Economic and social consequences

When machines are designed to replace human tasks, this can:

  • Reduce worker bargaining power (because substitution is easy)
  • Concentrate economic gains among machine-owners or platform owners
  • Reduce the impetus for human skill-development (if machines aim to replicate rather than augment)
Brynjolfsson hints at this, but the broader consequence is a slower pace of value creation and widening inequality.


4. Ethical and identity-related questions

If our goal is to build machines that are just like us, this raises questions about authenticity, human uniqueness and purpose. Research suggests AI’s “perfect imitation” may actually challenge our sense of what is human.

5. Practical pathways and alternatives

Beyond the policy recommendations, there are practical strategies:

  • Design metrics that reward human-machine teams (not just “machine replaces human”).
  • Encourage open systems that allow augmentation (e.g., copilots, decision-support) rather than closed substitution systems.
  • Rethink education and training, focusing on human skills that machines cannot replicate (judgment, creativity, social relationships).
  • Innovation policy should unlock new capabilities rather than mere automation.

🧭 Why This Matters — For Firms, Individuals & Society

  • For firms: Rethinking AI strategy from “substitute human labour” to “amplify human value” could unlock new business models, avoid commoditisation of labour, and lead to more sustainable growth.
  • For individuals/workers: If technology continues to mirror rather than extend humans, jobs and skills become easily substituted. Focusing on roles where humans add unique value (empathy, creativity, relationships) becomes even more important.
  • For society: The choice of technology design shapes who benefits. Imitation-focused tech may concentrate benefits; augmentation-focused tech can spread benefits, foster innovation and reduce inequality.

❓ Frequently Asked Questions (FAQs)

Q1: Isn’t human-imitation a good goal? If a machine can replicate human performance, isn’t that progress?
It can be progress. But it may also limit what technology achieves. If the benchmark is “do what a human does”, then machines cannot surpass humans or unlock new domains. The risk is a ceiling rather than a floor for innovation.

Q2: What is the “Turing Trap”?
It’s the idea that focusing on AI systems that imitate human intelligence (because “they pass as human”) diverts effort away from systems that amplify human capability. The trap lies in substitution rather than collaboration.

Q3: What does “augmentation” mean in this context?
Augmentation means designing machines that enhance human work—in ways humans cannot do alone. Examples: tools that help doctors diagnose faster while humans focus on judgment; tools that help designers explore possibilities, while humans choose vision.

Q4: How should companies change metrics or business models accordingly?
Instead of simply measuring automation (e.g., number of jobs replaced, cost savings), firms should measure human-machine team outcomes: customer satisfaction, innovation rate, employee redeployment, quality improvements.
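One way to operationalise this is a composite team metric rather than a pure cost-savings number. The function below is a hypothetical sketch: the metric names come from the list above, but the normalisation to [0, 1] and the weights are illustrative assumptions, not anything proposed in the article.

```python
def team_score(customer_sat, innovation_rate, redeployment_rate, quality,
               weights=(0.4, 0.2, 0.2, 0.2)):
    """Hypothetical composite score for a human-machine team.

    All inputs are assumed normalised to [0, 1]; the weights are
    illustrative only and would need tuning to a firm's priorities.
    """
    metrics = (customer_sat, innovation_rate, redeployment_rate, quality)
    return sum(w * m for w, m in zip(weights, metrics))

# A team that augments workers well scores on all four dimensions,
# not just on headcount reduced:
print(round(team_score(0.9, 0.6, 0.8, 0.85), 3))
```

The design choice worth noting: because customer satisfaction and quality carry weight alongside cost-style metrics, aggressive automation that degrades them can score worse than a slower augmentation strategy.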

Q5: What role do policymakers have here?
Policymakers can influence incentives (taxes, regulation) to favour augmentation. They can promote open data, interoperability, competition, and diffusion of technologies rather than concentration. They can also ensure workforce training focuses on human skills.

Q6: Is there real technical evidence that imitation-focused AI is suboptimal?
Yes: research suggests systems trained purely on imitation may lack generality, novelty and flexibility, and some robotics work shows that copying human motion limits a system to the demonstrator’s capabilities rather than letting it discover novel ones.


🔮 Final Thought

The question “Should technology imitate humans?” may sound like philosophy, but it’s deeply practical. If we build for imitation alone, we may limit what tech can achieve. We might replace humans where we should partner with them. We might focus on the tasks we already do rather than imagine what we could do with machine help.

The better vision: machines and humans working together, each doing what they do best and unlocking entirely new possibilities. That’s not just a nicer goal—it might be the smarter one.

Source: Financial Times
