
Emily Broadmore
Emily’s workshop equipped participants with practical strategies to align AI adoption with creative intelligence—emphasising the need to balance data-driven tools with human qualities like intuition, empathy, and meaning-making. Participants came away with clear approaches to using AI while strengthening human-centred leadership and governance.
A key focus of the session was identifying issues to watch for in your team when implementing AI. These included:
1. Cognitive Offloading
Cognitive offloading refers to the process of shifting mental tasks from the human brain to external tools—in this case, AI systems. When using AI, we increasingly rely on it to perform tasks such as writing, analysis, decision-making, and memory recall. While this can improve efficiency and free up mental bandwidth, over-reliance on AI may lead to reduced critical thinking, problem-solving, and even basic recall skills. The risk is that leaders and teams may become too dependent on AI outputs, weakening their own judgement and capacity to challenge or improve those results.
2. Loss of Creative Intuition
AI excels at pattern recognition and replicating existing forms of content or solutions, but it doesn’t generate originality in the way humans do. Overuse of AI—particularly in ideation, design, or strategy—can dull our creative instincts. When organisations default to AI to brainstorm or develop ideas, they may lose the unique human spark that comes from intuition, lived experience, and serendipitous thinking. This loss could mean fewer breakthroughs and a decline in innovation over time.
3. Trust in the Tools
Trust in AI tools is essential—but it must be well-placed. As AI becomes more embedded in decision-making processes, users must understand its capabilities, limitations, and biases. Blind trust in AI can lead to errors being accepted without scrutiny, while too little trust can prevent its effective use. Building appropriate trust means fostering AI literacy across teams, ensuring transparency in how AI systems work, and creating feedback loops where human oversight remains integral.
4. Good Enough Culture
A “good enough” culture emerges when AI is used to produce fast, functional outputs that are acceptable, but not necessarily excellent. This mindset can be tempting in fast-paced environments where speed and scale matter. However, settling for AI-generated mediocrity can gradually erode quality, depth, and originality, especially in areas like communications, learning design, or customer engagement. It’s important to balance efficiency with the pursuit of excellence, and to maintain human review and refinement as a key step in the AI workflow.
5. The Expertise Gap
As AI tools become more sophisticated, there’s a growing gap between what the tools can do and what users understand about how they work. Many people use AI without fully grasping how it processes data, where its biases come from, or how its outputs are generated. This gap can lead to poor decision-making, compliance risks, or ethical oversights. Bridging the expertise gap requires upskilling, transparency, and cross-functional collaboration—bringing together technical, ethical, and domain expertise to ensure AI is used responsibly and effectively.
Workshop participants were then set loose in the world of AI to write a LinkedIn post using AI, while questioning and critiquing the language that came back.
More free leadership and AI resources are available for download from the Heft website: www.heft.co.nz