Blog
Answering your questions from the AI as a stakeholder webinar
In this blog, panellists from our recent webinar on “AI as a stakeholder” answer your burning questions.
AI has become a powerful stakeholder in its own right. No longer just another ‘technological advancement’, it is now an active contributor to modern-day communications and has massively changed today’s media landscape.
Isentia hosted an essential conversation with Lisa Main (Director, Main Bureau), Dr Nici Sweaney (Founder and Director, AI Her Way), Prashant Saxena (Isentia’s VP of Revenue and Insights, SEA), and Ngaire Crawford (Isentia’s Director of Insights, ANZ). Together, they explored how AI is reshaping the world of communications and corporate affairs, and how organisations can manage and strategically engage with it.
In this session, we covered:
Following the webinar, our panellists took the time to answer the most insightful questions from our attendees that we couldn't get to during the live session. Here are their expert perspectives.
As the Founder and Director of AI Her Way, Dr Nici Sweaney advocates for a strategic approach to AI that prioritises human intent over technical capability. The questions directed to her focused on the ethical foundations of AI, how organisations should structure their internal AI strategy, and practical ways to start using agents today.
Ethical AI, to me, is about two things working together: avoiding harm and actively doing good. It’s not just “don’t break anything” — but genuinely asking, does this create value for the business, for the people using it, and for the broader world? Transparency, equity, and accountability are the pillars. Transparency means being honest with your audience and colleagues about when AI is involved. Equity means asking who this helps and who it leaves behind, as AI scales existing biases. Finally, accountability means humans stay in the loop. AI should inform decisions, not make them. When the "why" is clear — like saving a team time to focus on strategy — you are using AI with integrity.
My answer is probably not what IT wants to hear. AI is part of your infrastructure, so IT must be involved for security and guardrails. However, the strategy behind adoption is fundamentally a human problem, not a technical one. I advocate for a cross-functional "coalition" that brings IT, HR, communications, and strategy to the same table. If you create a dedicated AI leadership role, that person should sit closer to human-centric functions like HR and communications. The hardest part of adoption isn’t the technology; it’s the people, the culture, and the narrative you build around it internally.
First, acknowledge that the fear is real; it is a biological response to an unprecedented rate of change. Trust is built through honesty. Pretending AI won’t displace roles destroys trust, so be honest about how the landscape is shifting. What actually moves people is showing, not telling. Show them how AI can solve their specific "pain points" — the tedious, joyless tasks that don't add value. When people see AI as an "empowered choice" that uplifts their work rather than replacing their judgment and strategic thinking, buy-in follows. Build confidence with small wins first.
Most professionals don’t need complex autonomous agents yet; they need custom bots and automated workflows. The magic is in understanding your process first. Some practical starting points include:
Your instinct is right. If your team uses free consumer tools, your data may be used to train future models. You should move to enterprise-grade tools like Claude for Teams, Microsoft Copilot, or ChatGPT Enterprise, which offer contractual data protections. You should also build an AI Usage Policy that defines which data is public, internal, or restricted, and map AI rules to those classes. In Australia, we recommend aligning with the EU AI Act — the most comprehensive framework available — to future-proof your organisation.
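The policy idea above (classify data, then map AI rules to those classes) can be sketched in a few lines. This is a toy illustration only: the tool names and the permissions below are assumptions for the example, not a recommended policy.

```python
# Toy sketch of an AI Usage Policy check. The data classes ("public",
# "internal", "restricted") follow the advice above; the tool names and
# rules are illustrative assumptions, not real guidance.

ALLOWED_TOOLS = {
    "public": {"consumer_chatbot", "enterprise_llm"},
    "internal": {"enterprise_llm"},  # tools with contractual data protections only
    "restricted": set(),             # never pasted into external AI tools
}

def ai_use_permitted(data_class: str, tool: str) -> bool:
    """Return True if the policy allows sending this data class to the tool."""
    return tool in ALLOWED_TOOLS.get(data_class, set())

print(ai_use_permitted("public", "consumer_chatbot"))   # True
print(ai_use_permitted("internal", "consumer_chatbot")) # False
```

Encoding the policy as data rather than prose makes it easy to audit and to wire into internal tooling later.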
Prashant Saxena, Isentia’s VP of Revenue and Insights for SEA, approaches AI through the lens of psychological bonding and media structural shifts. His insights address the changing role of media and the technical ways we must now communicate to satisfy AI as a new audience.
Media's value is shifting from being the "trusted narrator" for humans to being the "training signal" for AI. When AI models generate answers, they weight authoritative media sources much more heavily than random web content. Even as human trust erodes, media’s structural influence on AI-generated information is growing. For communicators, "earned media" now serves two audiences simultaneously: the humans who read it and the machines that learn from it. Publications with strong editorial standards become more valuable because AI systems use domain authority and editorial signals as quality proxies.
AI models don't "rank" sources like Google does. They weight information based on source authority, recency, consistency, and structured data quality. If five credible outlets report the same fact, that fact becomes a "high-confidence training signal." This means volume across credible sources matters more than a single "big hit." For your strategy, consistency of messaging across all placements is vital because AI looks for corroboration. Factual, entity-rich statements will be picked up more reliably than narrative-heavy feature writing.
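The corroboration point lends itself to a simple illustration: a claim carried by several distinct credible outlets counts for more than repeat coverage in one place. The outlet names and placements below are made up for the example.

```python
# Toy illustration of corroboration as a signal: count how many distinct
# credible outlets carry each claim. Outlets and claims are invented.
from collections import defaultdict

CREDIBLE_OUTLETS = {"Outlet A", "Outlet B", "Outlet C", "Outlet D"}

def corroboration(placements: list[tuple[str, str]]) -> dict[str, int]:
    """Map each claim to the number of distinct credible outlets carrying it."""
    seen = defaultdict(set)
    for outlet, claim in placements:
        if outlet in CREDIBLE_OUTLETS:
            seen[claim].add(outlet)
    return {claim: len(outlets) for claim, outlets in seen.items()}

placements = [
    ("Outlet A", "emissions cut 34%"),
    ("Outlet B", "emissions cut 34%"),
    ("Outlet B", "emissions cut 34%"),   # a repeat in the same outlet adds nothing
    ("Random blog", "emissions cut 34%"),  # not a credible outlet, ignored
    ("Outlet C", "new CEO appointed"),
]
print(corroboration(placements))
```

The same fact in two credible outlets scores 2; a hundred repeats in one low-authority blog would score 0, which mirrors the "volume across credible sources" advice.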
This is the core of my PhD research. It is what I call "synthetic authenticity." AI systems deploy cues like warmth and memory that we evolved to interpret as human. These trigger "parasocial bonding" — the same mechanism that makes you trust a friend’s recommendation. The danger is that cognitive awareness (knowing it’s AI) doesn't override the emotional feeling. We need a new kind of literacy that teaches people to recognise when their "trust response" is being activated by design rather than by a genuine relationship.
Yes. This is a very practical move. AI models extract information more reliably from structured formats. A Q&A format gives the AI clear question-answer pairs that map to how people query systems. You should also focus on "AI-readable claims" — entity-rich, factual statements. Instead of saying "We are committed to sustainability," say "Our Singapore operations reduced carbon emissions by 34% between 2023 and 2025." The second version is a verifiable fact an AI can actually use and cite.
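One concrete way to make Q&A content machine-readable is to publish Schema.org FAQPage JSON-LD alongside the page copy. The schema.org types used here are real; the helper function and the sample question are illustrative.

```python
# Build a Schema.org FAQPage JSON-LD block from (question, answer) pairs,
# so each answer becomes a clear, AI-readable question-answer unit.
import json

def faq_jsonld(pairs: list[tuple[str, str]]) -> str:
    """Return a FAQPage JSON-LD document for the given Q&A pairs."""
    doc = {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": q,
                "acceptedAnswer": {"@type": "Answer", "text": a},
            }
            for q, a in pairs
        ],
    }
    return json.dumps(doc, indent=2)

print(faq_jsonld([
    ("How much did emissions fall?",
     "Our Singapore operations reduced carbon emissions by 34% between 2023 and 2025."),
]))
```

Note how the answer text is the entity-rich, verifiable style of claim described above, rather than a vague commitment statement.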
This is the new frontier. Traditional monitoring tracks what humans publish; AI sentiment monitoring tracks what AI systems say about your brand when asked. Since there is no single "AI sentiment" (ChatGPT, Grok, and Claude all give different answers based on their training), you need to monitor across platforms. We are developing capabilities to systematically query these platforms to see how their narratives change over time and identify which source materials are driving those answers.
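The cross-platform monitoring described above can be sketched in miniature: put the same brand question to several assistants and compare which narratives each one surfaces. The responses below are canned stand-ins; in practice each answer would come from that platform, and the comparison logic would be far richer than a keyword check.

```python
# Toy sketch of cross-platform AI narrative monitoring. The brand "Acme"
# and all three responses are invented stand-ins for real API output.
canned_answers = {
    "ChatGPT": "Acme is known for reliable products and a 2024 recall.",
    "Claude":  "Acme is a mid-size manufacturer; coverage is mostly neutral.",
    "Grok":    "Acme's recall dominated recent discussion.",
}

def answers_mentioning(term: str, answers: dict[str, str]) -> list[str]:
    """Which platforms' answers mention a given narrative term?"""
    return sorted(m for m, text in answers.items() if term.lower() in text.lower())

print(answers_mentioning("recall", canned_answers))  # ['ChatGPT', 'Grok']
```

Running the same queries on a schedule and diffing the results over time is the essence of the capability described: you see which narratives each platform carries and when they shift.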
Every model reflects the values, training data choices, and alignment decisions of its creators. ChatGPT (OpenAI) tends towards cautious, balanced responses with strong content guardrails. Conversely, Grok (xAI) was explicitly designed to be less filtered, sometimes surfacing perspectives that other models suppress. Claude (Anthropic) prioritises honesty and nuance. For communicators, this means your brand's narrative varies by platform; you must monitor across multiple models because the same question about your brand will receive materially different answers depending on which tool is used.
Major publishers like the New York Times and Reuters have blocked AI crawlers, creating a gap in training data. When authoritative journalism is unavailable, AI models may fill that gap with lower-quality content or brand-owned content. For communicators, this means your "owned content" — such as your website, blog, and structured data — carries proportionally more weight in AI-generated answers. Your media targeting strategy now needs to account for which outlets are AI-accessible, as they will be disproportionately influential in shaping your narrative.
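The blocking described here typically happens via robots.txt. As an illustrative fragment (GPTBot, CCBot, and Google-Extended are documented crawler tokens, but this list is not exhaustive and each vendor's documentation should be checked), a publisher opting out of AI training crawls might publish:

```text
# Illustrative robots.txt fragment blocking known AI training crawlers
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Google-Extended
Disallow: /
```

Conversely, a brand that wants its owned content to remain AI-accessible simply leaves these crawlers unblocked, which is part of why owned channels now carry more weight.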
Ngaire Crawford, Isentia’s Director of Insights for ANZ, emphasises the role of the analyst. Her approach is characterised by a "rhythm of interrogation," arguing that the most effective way to use AI is through constant questioning and a focus on high-authority inputs.
I was initially very sceptical, but it is now part of my everyday work. I use models like Claude and Gemini to workshop conference outlines, plan education programmes, update code, and structure strategic thinking. My best practice advice is to develop a "rhythm of interrogation." Don't just accept the first answer; ask for evidence and challenge the output. While AI saves time on technical tasks like coding, for strategic work it simply shifts the "mental load." You spend the same amount of time, but the depth and quality are significantly improved because you aren't starting from a blank page.
It's important to know that models are optimised to give the most useful answer, not necessarily the most accurate one. They are pattern-completing, not fact-checking. Because model responses are not fixed and change based on the conversation, I suggest focusing on the "controllable inputs" that feed them. This includes your own website, company material, Wikipedia data, and review sites (including employee reviews). Ensuring these bases are telling the intended story is the absolute best starting point for managing AI "sentiment."
There is no "PageRank" to reverse-engineer here. Models are shaped by what was prominent and widely cited in their training data. Practically, this means a shift from volume to authority. A hundred pieces of low-quality coverage do less work than ten pieces in genuinely credible outlets (major mastheads, industry publications, or your own well-structured site). The question for the modern communicator isn't "did we get coverage?", it's "does the coverage that exists, taken as a whole, tell a coherent and credible story?" AI reads the whole picture, not just the highlights reel.
Honestly? We don’t know yet. The commercial layer of AI is being figured out in real time. The moment someone wonders if they are getting the "best" answer or a "sponsored" one, trust erodes. However, we still click Google ads, so it will likely happen. What's important is that organisations that "earned" their reputation through authoritative presence before the ad market caught up will be in a much stronger position than those trying to buy a shortcut later.
The insights from our panellists make one thing clear: AI is no longer a tool of the future; it is a stakeholder of the present. To lead with credibility in this new era, communicators must pivot from chasing volume to building authority. Whether it is through adopting a rigorous ethical framework, optimising content for AI readability, or maintaining a "rhythm of interrogation" with the tools we use, the goal remains the same: ensuring our brand narratives are coherent, credible, and human-led.
The tools have finally caught up to the ambitions of our industry. Now, it is up to us to provide the architect's blueprint for how they are used.
Interested in viewing the whole recording? Watch our webinar here.
Alternatively, contact our team to learn more insights into meaningful measurement, KPIs and communicating using the right dataset.
The media landscape is accelerating. In an era where influence is ephemeral and every angle demands instant comprehension, PR and communications professionals require more than generic technology—they need intelligence engineered for their specific challenges.
Isentia is proud to introduce Lumina, a groundbreaking suite of intelligent AI tools. Lumina has been trained from the ground up on the complex workflows and realities of modern communications and public affairs. It is explicitly designed to shift professionals from passive media monitoring back into the role of strategic leaders and pacesetters.

“The PR, Comms and Public Affairs sectors have been experimenting with AI, but most tools have not been built with their real challenges in mind,” said Joanna Arnold, CEO of Pulsar Group.
“Lumina is different; it is the first intelligence suite designed around how narratives actually form today, combining human credibility signals with machine-level analysis. It helps teams understand how stories evolve, filter out noise and respond with context and confidence to crises and opportunities.”
Lumina is centered on empowering, not replacing, the human element of communications strategy. This suite is purpose-built to help PR, Comms, and Public Affairs professionals significantly improve productivity, enhance message clarity, and facilitate early risk detection.
Lumina enables communicators to:
We are launching the Lumina suite by making our first module immediately available: Stories & Perspectives.

In the current fragmented, multi-channel media environment, communications professionals need to see instantly not just how a story is growing, but also how it is being perceived across different stakeholder groups.
Stories & Perspectives organizes raw media mentions into clustered, cohesive Stories, and the Perspectives that exist within each, reflecting distinct media, audience, and public affairs angles. This unique functionality allows users to:
"Media isn’t a stream of mentions," said Kyle Lindsay, Head of Product at Pulsar Group. "It’s a living system of stories shaped by competing perspectives. When you can see those structures clearly, you gain the ability to understand issues as they form, anticipate how they’ll evolve, and act with precision. That’s what we mean when we talk about AI built for communicators, and that's what an off-the-shelf LLM can't give you."
The launch of Stories & Perspectives is the first release of many. Over the upcoming months, we will systematically roll out the full Lumina roadmap, introducing a comprehensive set of AI tools engineered to handle every phase of the communications lifecycle.
The full Lumina suite will soon incorporate:
Want to harness the power of Lumina AI for your PR, Comms, or Public Affairs team?
Complete the form below to register your interest.
Get in touch or request a demo.