Technology
April 23, 2025
6 min read

Why We Fired Our 'AI Team': The Case for Embedding AI Engineers, Not Siloing Them

We built a dedicated 'AI Team' (8 people, $2M/year). After 18 months, zero features shipped. We disbanded them. Here is why 'Centers of Excellence' are where AI goes to die.

In 2024, we did what every ambitious Series B startup does: We built an "AI Team." We hired 8 Machine Learning Engineers. We bought GPUs. We subscribed to every data labeling platform. We called it the "AI Center of Excellence."

The team sat in a corner of the office (literally a different floor). They built models. They presented demos to the Product team. The demos were beautiful. The accuracy metrics were impressive.

But here is the problem: not one of those models ever shipped to production.

After 18 months and roughly $2 million in burn, I made the hardest call of my career. I disbanded the team.

We didn't fire them. We embedded them. We took each AI Engineer and placed them directly into a Product Squad (Search, Recommendations, Fraud Detection).

In the three months that followed, we shipped five AI-powered features. Revenue went up. Churn went down.

Here is the autopsy of our "AI Center of Excellence" and why organizational structure is the true bottleneck to AI adoption.

Section 1: The "Research Lab" Anti-Pattern

When you create a separate AI team, you accidentally create a Research Lab inside a Product Company.

Research Labs optimize for publications and novel architectures. Product Companies optimize for Revenue and User Satisfaction.

The Metric Mismatch:

Our AI team spent 6 months improving an NLP model's F1 score from 0.87 to 0.91. They celebrated. They wrote internal blog posts.

But nobody asked: "Does moving F1 from 0.87 to 0.91 translate to a measurable change in user behavior?"

The answer was No. Users couldn't tell the difference. The improvement was imperceptible in the product. We had optimized for a metric that didn't matter to the business.

The "Demo" Culture:

Because the AI team was separate from Product, their only "output" was Demos.

They would build a beautiful Jupyter Notebook presentation. They would show it to the CEO, the Product Lead, and the Investors. Everyone would clap.

Then, the Demo would go into a folder called "AI Demos" and never be seen again.

Why? Because the Product Squad didn't ask for it. They didn't have a roadmap slot for it. They didn't understand the integration work required. The Demo was a solution in search of a problem.

The "Handoff" Problem:

Even when an AI model did solve a real problem, the handoff to Engineering was brutal.

The AI team would say: "Here is the model. It's in a .pkl file. Deploy it."

The Engineering team would say: "This model takes 5 seconds per inference. We need sub-100ms for real-time. Also, it requires 32GB of RAM. Our production pods have 4GB."

The AI team would shrug. "That's an Engineering problem."

The feature would die in the handoff graveyard.

Section 2: The Organizational Structure of Failure

The root cause was not the people. Our AI Engineers were brilliant. The root cause was the Org Chart.

The Matrix Management Trap:

Our AI Engineers reported to an "AI Lead" (who reported to the CTO). But for any given project, they also had a "Dotted Line" to a Product Manager.

This sounds reasonable. In practice, it was chaos.

  • The AI Lead prioritized "Technical Excellence" and wanted to explore new architectures.
  • The PM prioritized "Shipping Fast" and wanted the simplest possible solution.

The AI Engineer was caught in the middle. They pleased neither. They burned out.

The Ownership Vacuum:

When an AI-powered feature failed in production, who was responsible?

  • The AI team said: "We gave you a working model. Deployment is your fault."
  • The Platform team said: "We deployed what you gave us. The model is broken."

Nobody owned the outcome. Everyone owned a piece. Accountability diffused into nothingness.

The PM Knowledge Gap:

Our Product Managers did not understand ML. They thought AI was magic.

They would promise customers: "Our AI will predict churn with 99% accuracy!"

Then, they would be angry when the AI team explained that 70% was the state of the art for their dataset. The PM felt lied to. The AI team felt undervalued. Trust eroded.

Section 3: The "Embedded" Model That Works

When we disbanded the AI team, we did not fire them. We re-assigned them.

The New Structure:

  • The Search Squad (PM + 4 Backend Engineers + 1 AI Engineer).
  • The Recommendations Squad (PM + 3 Backend Engineers + 2 AI Engineers).
  • The Fraud Squad (PM + 2 Backend Engineers + 1 AI Engineer).

The AI Engineers were no longer "AI Specialists." They were Product Engineers who happened to know ML.

They Owned the Feature End-to-End:

The AI Engineer in the Search Squad didn't just "build the model." They:

  1. Wrote the data pipeline to collect training data.
  2. Trained the model.
  3. Wrote the inference service (in Go, not Python, because that's what the team used; see the sketch below).
  4. Deployed it to Kubernetes.
  5. Set up monitoring dashboards.
  6. Joined the on-call rotation and got paged when it broke at 2 AM.

They were not "throwing a model over the wall." They were living with the consequences of their choices.
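To make item 3 concrete, here is a minimal sketch of what an embedded engineer's Go inference service might look like. This is illustrative, not our actual code: the /score route, the request and response shapes, and the scoreRelevance stub are hypothetical placeholders for whatever your real model does (in production the stub would call an ONNX runtime, a sidecar, or a hand-rolled scorer).

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
	"time"
)

// Hypothetical request/response shapes for a search-relevance scorer.
type scoreRequest struct {
	Query  string   `json:"query"`
	DocIDs []string `json:"doc_ids"`
}

type scoreResponse struct {
	Scores    map[string]float64 `json:"scores"`
	LatencyMS int64              `json:"latency_ms"`
}

// scoreRelevance is a stand-in for the real model call. It exists only to
// make the sketch runnable; it returns a placeholder rank-based score.
func scoreRelevance(query string, docIDs []string) map[string]float64 {
	scores := make(map[string]float64, len(docIDs))
	for i, id := range docIDs {
		scores[id] = 1.0 / float64(i+1)
	}
	return scores
}

func scoreHandler(w http.ResponseWriter, r *http.Request) {
	start := time.Now()

	var req scoreRequest
	if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
		http.Error(w, "bad request", http.StatusBadRequest)
		return
	}

	resp := scoreResponse{
		Scores:    scoreRelevance(req.Query, req.DocIDs),
		LatencyMS: time.Since(start).Milliseconds(),
	}

	// Logging latency here is what feeds the monitoring dashboards (item 5).
	log.Printf("scored %d docs in %dms", len(req.DocIDs), resp.LatencyMS)

	w.Header().Set("Content-Type", "application/json")
	_ = json.NewEncoder(w).Encode(resp)
}

func main() {
	http.HandleFunc("/score", scoreHandler)
	// A trivial health endpoint keeps the Kubernetes liveness probe (item 4) happy.
	http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

The specifics don't matter. What matters is that the person who trained the model also owns the request path, the latency budget, the health checks, and the logs that feed the dashboards.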

They Heard Customer Pain Directly:

In the old model, the AI team never talked to customers. They only talked to PMs (who filtered the feedback).

In the new model, the AI Engineer sat in customer calls. They heard: "Your search results suck. I can never find the thing I want."

That direct exposure changed everything. They stopped optimizing for F1 scores. They started optimizing for "Can the user find the thing?"

They Were Incentivized on Feature Success:

Their performance review was no longer "How many papers did you read?" It was "Did Search Relevance improve by 20%?"

Alignment. Finally.

Section 4: How to Disband Your AI Team (The Playbook)

If you have an "AI Center of Excellence" that isn't shipping, here is the playbook.

Step 1: Identify the 3 Product Squads with the Most AI Potential.

Look for Squads that are already data-rich and have clear prediction/classification problems (Search, Recommendations, Pricing, Fraud).

Step 2: Re-Org the AI Team into Those Squads.

This is hard politically. The AI Lead will feel demoted. Handle it sensitively. Frame it as "Empowerment," not "Demotion."

(In our case, the AI Lead became a "Principal ML Engineer" who mentored the embedded engineers but had no direct reports. He was happier coding anyway.)

Step 3: Kill the "Center of Excellence" Slack Channel.

Seriously. If you leave the old channel alive, people will default to asking questions there instead of learning to integrate with their new Squad.

Create a new channel called "#ml-guild" for cross-pollination of learnings (a Community of Practice, not a Center of Excellence).

Step 4: Change the Job Descriptions.

Stop hiring for "Machine Learning Engineer." Start hiring for "Product Engineer (ML Focus)."

The people you want are T-shaped: Deep in ML, but capable of writing production services, debugging infra, and talking to customers.

Conclusion

AI is not a department. It is a capability.

You don't have a "Database Team" that writes all your SQL queries. You expect every Backend Engineer to know SQL. AI should be the same.

Spread the capability thin. Embed the expertise. Kill the silos.

Your "AI Center of Excellence" is where AI features go to die in a beautiful Jupyter Notebook graveyard.

Tags: Technology, Tutorial, Guide

Written by XQA Team

Our team of experts delivers insights on technology, business, and design. We are dedicated to helping you build better products and scale your business.