
Working with AI is like working with fire - here’s how to make sure you don’t get burned

Written by Khaled Nassra | Jul 21, 2025 2:58:45 AM

My first office job was with SAP, supporting the launch of SAP Leonardo in the telecom space. At the time, “AI-powered” usually meant a stack of conditional logic with a slick UI. Watson felt like the most impressive tool we’d get for years. Sales teams moved on quickly once they realised the limitations.

Then came Google’s “Attention Is All You Need.” That 2017 paper introduced the transformer architecture behind everything we now associate with generative AI: ChatGPT, Gemini, Claude, LLaMA. It changed the landscape completely.

Since then, I’ve helped bring Azure AI into subscription software, guided early Copilot solutions to market, and continue building Enki using AI every day. So, when I picked up Ethan Mollick’s Co-Intelligence: Living and Working with AI, I was benchmarking my work as I was reading.

The cover caught my attention right away: a hand reaching for an orange. It didn’t scream “AI,” but it made me think about the agricultural revolution, a moment when we shifted from scarcity to abundance. AI might offer the same kind of leap, if we use it wisely.

Co-intelligence starts with the right questions

Mollick doesn’t sell AI as a miracle. He walks readers through how to use it well—how to work with it rather than against it. His premise is simple: humans and machines can think together. He frames this relationship using four principles:

  1. Invite AI to the table and learn what it can do
    Explore the tools. Get curious. Push them and watch what they return.

  2. Stay in the loop
    AI works best when a human steers the process. It should extend your ability, not replace your judgement.

  3. Tell it who it is
    Define a tone or perspective. For example, ask it to act like a legal researcher or an empathetic coach.

  4. Assume this is the worst AI you’ll ever use
    Improvements will come quickly. Don’t underestimate what’s around the corner.

These ideas reflect how I already use AI. I almost always assign it a role (e.g. coach, writer, programmer). I test boundaries, adjust prompts, and course-correct. But the fourth principle made me pause. I still catch myself avoiding tasks because older versions of ChatGPT couldn’t handle them well. That hesitation no longer holds up.
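Assigning the model a role usually comes down to how you frame the first message. A minimal sketch of that habit, using the chat-message format common to most LLM APIs (the role descriptions and the `build_prompt` helper are my own illustration, not from the book):

```python
def build_prompt(role_description, user_question):
    """Compose a chat payload that first tells the model who it is,
    then asks the actual question."""
    return [
        {"role": "system", "content": f"You are {role_description}."},
        {"role": "user", "content": user_question},
    ]

# The same question lands differently depending on the assigned role.
legal = build_prompt(
    "a meticulous legal researcher who cites relevant regulations",
    "Summarise the data-retention obligations under GDPR.",
)
coach = build_prompt(
    "an empathetic coach who asks clarifying questions",
    "How should I prepare for a difficult performance review?",
)
```

The payload then goes to whichever model you use; the point is that the role lives in the system message, so you can swap personas without rewriting the question.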

A year ago, I asked ChatGPT to build a version of the classic Snake game in Python. It needed a lot of back-and-forth. This summer, I ran the same prompt through GPT-4o and got a working game instantly. That alone validated Mollick’s point: today’s weakest AI will soon look like a relic.

AI doesn’t take your job, it redefines it

Mollick draws a helpful distinction between AI acting as a tutor, a coach, and a co-worker. In practice, it can play all three roles in a single project.

Take one of my clients: a mid-market AP automation vendor. Their software uses AI to manage invoice approvals in Microsoft Dynamics. On the surface, that might sound like it threatens the role of AP clerks. But users don't see it that way.

They see a chance to shift from repetitive tasks to more strategic ones. With cleaner workflows, they also train and promote team members faster. The AI doesn’t eliminate their function; it frees up time to improve it.

That’s a useful lens for anyone working in finance, consulting, or technology. If AI cuts friction and creates capacity, the smartest move is to re-invest that capacity in higher-value work.

The human element is still essential

In one section, Mollick runs the same question through ChatGPT using three different tones: cold, neutral, and curious. Each tone prompts a different result.

The cold tone triggers a defensive response. The curious one leads to openness and deeper analysis. That’s something I’ve noticed myself, although I used to joke about it. After reading Co-Intelligence, I stopped joking. Politeness and tone aren’t just ethical choices; they improve outcomes.

Recent research backs this up. Prompt tone affects how language models perform across cultures and languages. So, if your default is to bark orders at a tool like it’s a search engine, you’re leaving value on the table.
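You can run Mollick’s tone experiment yourself by framing one question three ways and sending each variant to the same model. A minimal sketch (the wording of each variant is my own example, and the actual API call is omitted):

```python
# One underlying question, three tones. Sending each to the same model
# and comparing the answers makes the effect easy to see.
QUESTION = "Why did the quarterly forecast miss its target?"

TONES = {
    "cold": f"{QUESTION} Just answer. No excuses.",
    "neutral": QUESTION,
    "curious": (
        f"I'm genuinely curious: {QUESTION} "
        "What factors might I be overlooking?"
    ),
}

for tone, prompt in TONES.items():
    # send_to_model(prompt) would go here; we just show the variants.
    print(f"[{tone}] {prompt}")
```

Same information request in every case; only the framing changes, which is exactly the variable Mollick isolates.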

In enterprise software, the race is already on

ERP vendors are leaning into AI. Microsoft leads with Copilot across Dynamics 365. Salesforce has Einstein. SAP offers Joule. These are more than features; they signal a shift from software that records data to systems that suggest action.

That changes the competitive landscape: access to AI is no longer the differentiator, execution is. The advantage goes to the companies that know how to apply it, inside and outside their systems.

This reminds me of when ERPs first entered the mainstream. Companies that adapted quickly were able to model scenarios, forecast more accurately, and scale operations. AI is following that same trajectory.

As much as I appreciated Mollick’s optimism, I wanted more discussion about governance, trust, and industry accountability.

There’s a lesson here from Relay Robotics (then called Savioke), whose story appears in Jake Knapp’s Sprint. The team was building hotel service robots. When the robots looked too human, guests expected more than the bots could deliver. That mismatch created disappointment.

The same problem applies to AI in ERP. If you promise too much, users lose trust fast. Even when performance improves, they may not come back. It’s better to show the limitations and train your users to succeed alongside the tool.

What if AI is your cofounder?

Mollick closes with a question I’ve asked myself more than once: if AI contributes to your decisions, drafts your content, and supports your operations, does it count as a cofounder?

For me, probably not. But it’s definitely a business partner. I use AI to accelerate onboarding, stress-test ideas, design workshops, and translate concepts into client-ready outputs. It handles some of the heavy lifting so I can focus on vision and value.

Whether you view yourself as a centaur (dividing tasks between human and machine) or a cyborg (blending the two), you’re part of a new era. We’re no longer just managing systems, we’re co-thinking with them.

At Enki, we embrace this shift. We help clients adapt their teams, tools, and thinking to build better systems, not just faster ones. If AI is fire, then co-intelligence is how we build with it instead of getting burned.

Interested in learning more about what's next in the world of AI and agentic workflows? Download our latest guide below: