In the midst of all the discussion about AI, a recent client email offered a timely reminder.
After Jess had supported a client through a tricky issue, the client replied with a simple line:
“Thank you Jess. It’s fairly evident that Google and ChatGPT are definitely not in the same calibre as you!”
It wasn’t a dig at technology. It was a reminder. A reminder of the value of human judgement, context, and experience – especially in complex, high-stakes situations like HR, IR and workplace relations.
AI is impressive… and that’s part of the problem
Like many businesses, we’ve been actively exploring how AI can support the way we work. And let’s be honest – “Chatty” (as we fondly call him/her/them) can produce a brilliant-sounding response to almost any question in seconds.
The challenge? Brilliant-sounding does not mean correct.
We’re increasingly seeing situations where AI-generated content is leading people – employers and employees – down very wrong paths. Advice that sounds confident but is legally or practically flawed. Assertions that are neatly structured, well-worded… and fundamentally off-base.
In HR and IR, where nuance, context and consequences matter, that’s not just inconvenient. It’s risky.
The double-edged sword we’re already seeing
AI will be a game changer across business – in wonderful ways and in some genuinely challenging ones.
One example we’re seeing more frequently is AI-generated employee grievances. These can be 10, 20, even 70-page documents that:
- Repeat the same point multiple times in different wording
- Use highly emotive and accusatory language
- Cite legislation and case law that ranges from marginally misquoted to completely incorrect
They look authoritative. They feel overwhelming. But when you strip them back, around 95% of the content is either irrelevant, wrong, or unnecessary.
Without a skilled human in the loop, it’s very easy for these documents to escalate conflict, entrench positions, and derail resolution.
AI is powerful – but it needs to be applied well
None of this is an argument against AI. Quite the opposite.
AI is powerful, efficient, and here to stay. The organisations that thrive will be the ones that learn how to apply it well – not blindly, but deliberately and thoughtfully.
Which leads me to a prediction.
Each year, organisations like the Macquarie Dictionary and Oxford University Press announce a “word of the year”. My early prediction for the end of 2026? Discernment.
In a world where we can generate letters, policies, complaints, legal briefs, responses and marketing copy in seconds, discernment will be the critical skill. The ability to:
- Evaluate whether the output is accurate and appropriate
- Spot what doesn’t quite ring true
- Remove the guff
- Apply judgement before pressing “send”
Without discernment, AI can help you move confidently and speedily … straight up the garden path.
Why this matters for teams and leaders
Patrick Lencioni’s Working Genius model identifies Discernment as one of the six core working geniuses – defined as “the natural gift of intuitively and instinctively evaluating ideas and situations.”
In an AI-enabled workplace, this is a capability worth actively valuing and developing. It may even be something we need to start looking for more deliberately in our teams.
For those interested, there’s a Working Genius profile you can complete online – we have no affiliation, we just genuinely love the framework. Learn more here.
What this means for employers
- AI can support productivity, but it should not replace judgement
- Human review and context are essential in HR and IR matters
- Poorly applied AI increases risk, not efficiency
- Discernment is fast becoming a core workplace capability
At Focus HR, we’re excited about what AI can do – with humans firmly in the loop for the important stuff. Technology can generate the words. Experience, judgement and discernment decide whether they’re the right ones.
If you’d like support navigating AI-related HR or IR issues, or sense-checking something that “sounds right” but doesn’t quite feel right – we’re here.
