Thursday, March 26, 2026

AI Leadership Field Notes - Week 3

How I am mapping Untamed to AI Leadership

1. From External Validation → Independent Technical Judgment

Untamed principle: Stop living for approval
AI leadership translation: Stop copying what other companies are doing with AI

Common anti-pattern

  • “Everyone is doing copilots → we should too”
  • Vendor-led roadmaps

Leadership shift

  • Build first-principles AI strategy, not trend-following

“AI leaders must resist the industry echo chamber. The goal is not adoption—it’s advantage.”


2. Inner Knowing → Product & Platform Conviction

Untamed principle: Trust your inner voice
AI translation: Strong product + platform intuition

Where this matters

  • Build vs buy decisions
  • Model selection (open vs closed)
  • Platform abstraction layers

Actionable lens

  • Do your teams understand why they are building something?
  • Or are they executing tickets?

3. Feel Everything → Don’t Hide AI Risks

Untamed principle: Feel discomfort
AI translation: Face uncomfortable truths about AI

Examples

  • Models hallucinate
  • ROI is unclear
  • Teams are overwhelmed
  • Tech debt increases

Leadership maturity

  • Call out risks explicitly instead of over-selling AI success 
  • Learning: “Mature AI orgs don’t suppress uncertainty—they operationalize it.”

4. Relationships Should Expand You → AI Should Augment Teams

Untamed principle: Relationships should not shrink you
AI translation: AI should expand human capability, not constrain it

Red flag

  • Engineers becoming prompt operators only
  • Creativity being reduced

Leadership goal

  • AI as force multiplier, not replacement mindset

5. Integrity → Alignment Between AI Strategy and Reality

Untamed principle: Inner truth = outer life
AI translation: Strategy ≠ slideware

Check for integrity gaps

  • “AI-first company” but no infra readiness
  • “Data-driven” but poor data quality
  • “Automation” but manual workflows

Leadership responsibility

  • Close the gap between narrative and execution

6. Reject Perfectionism → Ship Iterative AI Systems

Untamed principle: Perfection is a trap
AI translation: Avoid over-engineering AI before learning

Best practice

  • Ship v1 with guardrails
  • Learn → iterate → scale

Anti-pattern

  • Endless evaluation cycles with no deployment

7. Self-Trust → Decisive AI Leadership

Untamed principle: Self-trust is freedom
AI translation: Leaders must make decisions under ambiguity

Reality

  • AI space is evolving too fast for certainty

What strong leaders do

  • Make reversible decisions quickly
  • Accept imperfect information

Closing

“AI maturity starts when you stop asking what others are building.”

Saturday, March 7, 2026

AI Leadership Field Notes — Week 2


Courage and Psychological Safety in AI Teams

Artificial intelligence systems are complex. But the biggest risks in AI rarely come from the code itself.

They come from silence. What kind of silence?

- From engineers who notice bias but hesitate to raise it
- From researchers who see model limitations but feel pressure to ship
- From teams that optimize for speed while uncertainty quietly accumulates

The most dangerous AI teams are not the least intelligent. They are the least psychologically safe.

This week’s field note is inspired by:

📘 Dare to Lead — by Brené Brown

Although the book is not about technology, its leadership lessons are incredibly relevant for organizations building AI systems that affect millions of people.


Why This Book Matters in AI Leadership

AI development lives in uncertainty.

Models drift.
Data contains bias.
Edge cases appear after deployment.

If teams feel pressure to appear confident at all times, problems stay hidden until they become failures.

True leadership in AI requires something different:

A culture where people can say without fear,
"This model worries me."


Three Leadership Insights I’m Applying

1️⃣ Courage is a Leadership Skill — Not a Personality Trait

Many people imagine courage as confidence.

But in leadership, courage often looks like something else:

Admitting uncertainty.

In AI teams, this can mean saying:

  • “We need more testing before deployment.”

  • “This dataset may be skewed.”

  • “Our model might behave differently in production.”

Practical application

Normalize uncertainty in engineering discussions.

If leaders can say, "I may be wrong; let's investigate," others will speak more openly too.


2️⃣ Psychological Safety Enables Better Engineering

AI teams solve difficult problems.

But innovation slows when people feel they must protect their reputation.

When psychological safety exists, engineers are more willing to:

  • admit mistakes early

  • question assumptions and data quality

  • raise risks before they become failures

Practical application

Replace blame-oriented postmortems with learning-oriented reviews.

Focus discussions on:

  • what the system revealed

  • what the team learned

  • how the process improves

This creates stronger AI systems over time.


3️⃣ Clear Conversations Prevent Invisible Risks

One of the most powerful ideas in Dare to Lead is "clear is kind."

Ambiguous leadership creates hidden tension.

In AI teams, unclear expectations can lead to rushed deployments or ignored warnings.

Practical application

Encourage direct conversations about:

  • model limitations

  • data quality

  • ethical concerns

  • operational risk

Transparency reduces the likelihood of unpleasant surprises later.


🔎 What This Means for AI Teams

AI leadership is not just about technical expertise.

It is about creating conditions where truth can surface early.

When teams feel safe to speak openly:

  • bias gets addressed sooner

  • system risks are understood earlier

  • model improvements happen faster

  • trust across teams increases

Psychological safety is not a “soft skill.”

In AI systems, it is a form of risk management.


💡 One Question I'm Asking Myself This Week

Am I creating an environment where engineers feel safe challenging the system — or only safe supporting it?


Closing Reflection

AI leadership requires more than intelligence. It requires courage.

Courage to ask difficult questions.
Courage to slow down when risk appears.
Courage to admit uncertainty while pursuing innovation.

Because the strongest AI systems are not built by teams that always appear confident, but by teams that are honest about what they still need to learn.

These are my field notes as I grow — not just as an engineer, but as a leader helping shape how AI systems are built responsibly.


Sunday, March 1, 2026

AI Leadership Field Notes — Week 1


Owning Technical Influence in the Age of AI

In AI, technical competence is assumed.

Leadership visibility is not.

As artificial intelligence reshapes products, operations, and strategy, the leaders who shape its direction are not just the most technically fluent — they are the most willing to step forward.

This week’s field note is inspired by:

📘 Lean In — by Sheryl Sandberg

While not an AI book, its leadership lessons are deeply relevant for women navigating engineering organizations undergoing AI transformation.


Why This Book Matters in AI Right Now

AI initiatives often begin in experimental pockets — a data science team here, a proof of concept there.

But scaling AI requires cross-functional ownership, executive sponsorship, and visible leadership.

And this is where many technically strong women hesitate.

Not because of capability.

Because of visibility.

AI is too important to shape quietly from the sidelines.


Three Leadership Insights I’m Applying

1️⃣ Sit at the Table — Especially in AI Strategy Conversations

In many organizations, AI roadmaps are decided in rooms where engineering, product, and business converge.

If you are technically fluent and understand system impact, you belong there.

Practical application:

  • Volunteer to present model performance trade-offs.

  • Lead discussions on AI governance risks.

  • Offer structured recommendations, not just technical updates.

AI leadership requires voice, not just expertise.


2️⃣ Seek Sponsorship, Not Just Mentorship

Mentors give advice.
Sponsors create opportunity.

As AI initiatives expand, high-visibility projects emerge:

  • Enterprise AI platforms

  • Ethical review boards

  • Cross-functional ML scaling efforts

Being technically capable is not enough. You must be seen as strategically essential.

Practical application:
Identify leaders who influence AI investment decisions and demonstrate how your expertise supports business outcomes — not just technical execution.


3️⃣ Communicate in Outcomes, Not Just Models

AI conversations often stay trapped in architecture and accuracy metrics.

Leadership conversations happen in impact.

Translate:

  • Model accuracy → revenue lift or risk mitigation

  • Bias detection → brand trust

  • Latency optimization → customer experience

Influence grows when you connect intelligence to value.


🔎 What This Means for AI Teams

AI transformation is not purely technical.

It is political, strategic, and cultural.

When women step into visible AI leadership roles:

  • Ethical concerns surface earlier

  • Risk discussions become more nuanced

  • Cross-functional alignment improves

  • Long-term thinking strengthens

Representation at the table changes what gets prioritized.

And what gets built.


💡 One Question I'm Asking Myself This Week

Where am I contributing expertise quietly, when I should be contributing direction visibly?


Closing Reflection

AI leadership is not about dominating rooms. It is about shaping decisions.

It is about ensuring intelligent systems are guided by intelligent judgment.

And that requires stepping forward — even when you would prefer to perfect the model first.

These are my field notes as I grow — not just as an engineer, but as a leader shaping how AI is built and deployed.
