Answering your questions from the AI as a stakeholder webinar

In this blog, panellists from our recent webinar on "AI as a stakeholder" answer your burning questions.

AI has become a powerful stakeholder in its own right. No longer just another 'technological advancement', it is now an active contributor to modern-day communications, and that shift has massively changed today's media landscape.

Isentia hosted an essential conversation with Lisa Main (Director, Main Bureau), Dr Nici Sweaney (Founder and Director, AI Her Way), Prashant Saxena (Isentia’s VP of Revenue and Insights, SEA), and Ngaire Crawford (Isentia’s Director of Insights, ANZ). Together, they explored how AI is reshaping the world of communications and corporate affairs, and how practitioners can manage and strategically engage with it.

In this session, we covered:

  • Understanding AI’s behaviour and influence as a digital stakeholder.
  • Navigating the unique challenges and opportunities AI presents as a new "audience."
  • The long-term impact of AI and LLMs on the industries central to modern communicators.

Following the webinar, our panellists took the time to answer the most insightful questions from our attendees that we couldn't get to during the live session. Here are their expert perspectives.

Ethical governance and human-centric adoption: Perspectives from Dr Nici Sweaney

As the Founder and Director of AI Her Way, Dr Nici Sweaney advocates for a strategic approach to AI that prioritises human intent over technical capability. The questions directed to her focused on the ethical foundations of AI, how organisations should structure their internal AI strategy, and practical ways to start using agents today.

Q: Could you please shed a little light on what ethical AI in your language means?

Ethical AI, to me, is about two things working together: avoiding harm and actively doing good. It’s not just “don’t break anything” — but genuinely asking, does this create value for the business, for the people using it, and for the broader world? Transparency, equity, and accountability are the pillars. Transparency means being honest with your audience and colleagues about when AI is involved. Equity means asking who this helps and who it leaves behind, as AI scales existing biases. Finally, accountability means humans stay in the loop. AI should inform decisions, not make them. When the "why" is clear — like saving a team time to focus on strategy — you are using AI with integrity.

Q: Should AI adoption be owned by IT or Internal Communications? I see staff intranets being overtaken by AI and this has implications for how employees are communicated with.

My answer is probably not what IT wants to hear. AI is part of your infrastructure, so IT must be involved for security and guardrails. However, the strategy behind adoption is fundamentally a human problem, not a technical one. I advocate for a cross-functional "coalition" that brings IT, HR, communications, and strategy to the same table. If you create a dedicated AI leadership role, that person should sit closer to human-centric functions like HR and communications. The hardest part of adoption isn’t the technology; it’s the people, the culture, and the narrative you build around it internally.

Q: What are the most effective ways to address colleagues' concerns about using AI agents in the workplace — particularly around trust, accuracy, and job security?

First, acknowledge that the fear is real; it is a biological response to an unprecedented rate of change. Trust is built through honesty. Pretending AI won’t displace roles destroys trust, so be honest about how the landscape is shifting. What actually moves people is showing, not telling. Show them how AI can solve their specific "pain points" — the tedious, joyless tasks that don't add value. When people see AI as an "empowered choice" that uplifts their work rather than replacing their judgment and strategic thinking, buy-in follows. Build confidence with small wins first.

Q: What are some simple AI agents that you would recommend communications professionals experiment with setting up?

Most professionals don’t need complex autonomous agents yet; they need custom bots and automated workflows. The magic is in understanding your process first. Some practical starting points include:

  • Daily Briefings: A task that pulls from your calendar, email, and news to deliver a summary each morning.
  • Meeting Prep: Automated notes that pull context and past correspondence before a meeting, and transcription tools that turn recordings into action items afterwards.
  • Content Repurposing: A custom bot trained on your "voice" that can turn one talk or newsletter into 15+ social media assets and blog snippets.
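
To make the "daily briefing" idea concrete, here is a minimal Python sketch. The feed functions below are hypothetical stand-ins; in a real setup they would wrap your calendar, inbox, and media-monitoring APIs, and the example items are invented.

```python
from datetime import date

# Hypothetical feed functions; in practice these would wrap your
# calendar, inbox, and media-monitoring APIs.
def todays_events():
    return ["09:30 leadership stand-up", "14:00 media training"]

def flagged_emails():
    return ["Journalist query: Q3 results embargo lifts Friday"]

def top_headlines():
    return ["Regulator opens consultation on AI disclosure rules"]

def daily_briefing() -> str:
    """Assemble the three feeds into a single morning summary."""
    sections = {
        "Calendar": todays_events(),
        "Needs a reply": flagged_emails(),
        "In the news": top_headlines(),
    }
    lines = [f"Daily briefing, {date.today().isoformat()}"]
    for heading, items in sections.items():
        lines.append(f"\n{heading}:")
        lines.extend(f"  - {item}" for item in items)
    return "\n".join(lines)

print(daily_briefing())
```

The value is in the plumbing, not the model: once the inputs are reliable, an LLM summarisation step can be layered on top.
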

Q: Our team members are using AI daily, but I know this is not safe as data is transferred back and forth. Should we create rules and ask people to sign IP protection agreements?

Your instinct is right. If your team uses free consumer tools, your data may be used to train future models. You should move to enterprise-grade tools like Claude for Teams, Microsoft Copilot, or ChatGPT Enterprise, which offer contractual data protections. You should also build an AI Usage Policy that defines which data is public, internal, or restricted, and map AI rules to those classes. In Australia, we recommend aligning with the EU AI Act — the most comprehensive framework available — to future-proof your organisation.
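
An AI Usage Policy built on data classes can be expressed as data, which makes it checkable. This is an illustrative sketch, not a recommended policy; the tier names and rules are assumptions.

```python
# Sketch of an AI usage policy as data: each classification maps to the
# tool tiers that may process it. Tier names are illustrative only.
POLICY = {
    "public":     {"consumer", "enterprise"},
    "internal":   {"enterprise"},
    "restricted": set(),  # never pasted into any AI tool
}

def may_use(classification: str, tool_tier: str) -> bool:
    """Return True if data of this classification may go into the tool."""
    allowed = POLICY.get(classification)
    if allowed is None:
        raise ValueError(f"Unknown classification: {classification}")
    return tool_tier in allowed

assert may_use("public", "consumer")
assert not may_use("internal", "consumer")   # free tools may train on it
assert not may_use("restricted", "enterprise")
```

Writing the policy this way forces the hard conversation (what counts as "restricted"?) before any tool is approved.
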

Synthetic authenticity and the new media ecosystem: Perspectives from Prashant Saxena

Prashant Saxena, Isentia’s VP of Revenue and Insights for SEA, approaches AI through the lens of psychological bonding and media structural shifts. His insights address the changing role of media and the technical ways we must now communicate to satisfy AI as a new audience.

Q: Given that trust in media is dropping and media themselves are using AI more, what is the role or value media can have now?

Media's value is shifting from being the "trusted narrator" for humans to being the "training signal" for AI. When AI models generate answers, they weight authoritative media sources much more heavily than random web content. Even as human trust erodes, media’s structural influence on AI-generated information is growing. For communicators, "earned media" now serves two audiences simultaneously: the humans who read it and the machines that learn from it. Publications with strong editorial standards become more valuable because AI systems use domain authority and editorial signals as quality proxies.

Q: How does AI rank or prioritise its sources and how do you see this shaping the earned media strategy for brands?

AI models don't "rank" sources like Google does. They weight information based on source authority, recency, consistency, and structured data quality. If five credible outlets report the same fact, that fact becomes a "high-confidence training signal." This means volume across credible sources matters more than a single "big hit." For your strategy, consistency of messaging across all placements is vital because AI looks for corroboration. Factual, entity-rich statements will be picked up more reliably than narrative-heavy feature writing.
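
The corroboration idea above can be sketched as a simple count of how many distinct credible outlets carry each claim. The outlet names, claims, and the three-outlet threshold are all invented for illustration.

```python
from collections import defaultdict

# Toy corroboration check: how many distinct credible outlets carry each
# claim? Outlets and claims are invented for illustration.
CREDIBLE = {"Major Masthead", "Industry Daily", "Wire Service"}

coverage = [
    ("Major Masthead", "emissions cut 34% since 2023"),
    ("Industry Daily", "emissions cut 34% since 2023"),
    ("Wire Service",   "emissions cut 34% since 2023"),
    ("Random Blog",    "emissions cut 34% since 2023"),
    ("Major Masthead", "new CEO appointed"),
]

support = defaultdict(set)
for outlet, claim in coverage:
    if outlet in CREDIBLE:
        support[claim].add(outlet)

for claim, outlets in support.items():
    strength = "high-confidence" if len(outlets) >= 3 else "weak"
    print(f"{claim!r}: {len(outlets)} credible outlets ({strength})")
```

Note that the blog mention adds nothing to the signal: only corroboration across credible sources moves the needle.
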

Q: With the question of trust — where does the psychology come into it when AI uses a cute nickname or 'remembers' your day? Is it harder to remain dispassionate?

This is the core of my PhD research. It is what I call "synthetic authenticity." AI systems deploy cues like warmth and memory that we evolved to interpret as human. These trigger "parasocial bonding" — the same mechanism that makes you trust a friend’s recommendation. The danger is that cognitive awareness (knowing it’s AI) doesn't override the emotional feeling. We need a new kind of literacy that teaches people to recognise when their "trust response" is being activated by design rather than by a genuine relationship.

Q: Should we be changing the format of communications to cater for AI as an audience, such as media releases in Q&A format?

Yes. This is a very practical move. AI models extract information more reliably from structured formats. A Q&A format gives the AI clear question-answer pairs that map to how people query systems. You should also focus on "AI-readable claims" — entity-rich, factual statements. Instead of saying "We are committed to sustainability," say "Our Singapore operations reduced carbon emissions by 34% between 2023 and 2025." The second version is a verifiable fact an AI can actually use and cite.
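
Beyond prose Q&A, the same pairs can be published as schema.org FAQPage markup, a widely used structured format that crawlers parse reliably. The question and answer below are illustrative, not from any real release.

```python
import json

# A media-release Q&A expressed as schema.org FAQPage JSON-LD.
# The content is invented for illustration.
faq = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "How much did Singapore operations cut emissions?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "Carbon emissions fell 34% between 2023 and 2025.",
            },
        }
    ],
}

print(json.dumps(faq, indent=2))
```

The entity-rich answer text doubles as an "AI-readable claim": a dated, quantified fact a model can extract and cite.
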

Q: PR professionals traditionally monitor media coverage through agencies like Isentia to gauge sentiment. With AI as a stakeholder, how do we monitor 'its sentiment'?

This is the new frontier. Traditional monitoring tracks what humans publish; AI sentiment monitoring tracks what AI systems say about your brand when asked. Since there is no single "AI sentiment" (ChatGPT, Grok, and Claude all give different answers based on their training), you need to monitor across platforms. We are developing capabilities to systematically query these platforms to see how their narratives change over time and identify which source materials are driving those answers.
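
The cross-platform querying described above can be sketched as a small harness: put the same question to several models and log timestamped answers for comparison over time. The stub callables below stand in for real API clients, which would require vendor SDKs and credentials; the model names and answers are invented.

```python
from datetime import datetime, timezone

def ask_all(models, question):
    """Query every model with the same prompt and timestamp the answers."""
    now = datetime.now(timezone.utc).isoformat()
    return [
        {"model": name, "asked_at": now, "answer": fn(question)}
        for name, fn in models.items()
    ]

# Stub clients for illustration only.
models = {
    "model-a": lambda q: "The brand is known for strong sustainability work.",
    "model-b": lambda q: "Coverage of the brand is mixed on sustainability.",
}

log = ask_all(models, "What is Acme Corp known for?")
for entry in log:
    print(entry["model"], "->", entry["answer"])
```

Run on a schedule, a log like this is what lets you spot a narrative drifting on one platform but not another.
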

Q: Regarding ethics and agendas in AI learning — what are the differences between models like ChatGPT and Grok, and how does this affect our brand narrative?

Every model reflects the values, training data choices, and alignment decisions of its creators. ChatGPT (OpenAI) tends towards cautious, balanced responses with strong content guardrails. Conversely, Grok (xAI) was explicitly designed to be less filtered, sometimes surfacing perspectives that other models suppress. Claude (Anthropic) prioritises honesty and nuance. For communicators, this means your brand's narrative varies by platform; you must monitor across multiple models because the same question about your brand will receive materially different answers depending on which tool is used.

Q: With many major news organisations blocking AI crawlers, how should we navigate content creation to ensure we still influence AI-generated answers?

Major publishers like the New York Times and Reuters have blocked AI crawlers, creating a gap in training data. When authoritative journalism is unavailable, AI models may fill that gap with lower-quality content or brand-owned content. For communicators, this means your "owned content" — such as your website, blog, and structured data — carries proportionally more weight in AI-generated answers. Your media targeting strategy now needs to account for which outlets are AI-accessible, as they will be disproportionately influential in shaping your narrative.

Analytical interrogation and the search for authority: Perspectives from Ngaire Crawford

Ngaire Crawford, Isentia’s Director of Insights for ANZ, emphasises the role of the analyst. Her approach is characterised by a "rhythm of interrogation," arguing that the most effective way to use AI is through constant questioning and a focus on high-authority inputs.

Q: Is AI already part of your daily work or habit? If so, how are you using it and what are your best practices?

I was initially very sceptical, but it is now part of my everyday work. I use models like Claude and Gemini to workshop conference outlines, plan education programmes, update code, and structure strategic thinking. My best practice advice is to develop a "rhythm of interrogation." Don't just accept the first answer; ask for evidence and challenge the output. While AI saves time on technical tasks like coding, for strategic work it simply shifts the "mental load." You spend the same amount of time, but the depth and quality are significantly improved because you aren't starting from a blank page.

Q: PR professionals traditionally monitor media coverage through agencies like Isentia to gauge what stakeholders think about a brand. How do we monitor 'AI sentiment' and the information that feeds these models?

It's important to know that models are optimised to give the most useful answer, not necessarily the most accurate one. They are pattern-completing, not fact-checking. Because model responses are not fixed and change based on the conversation, I suggest focusing on the "controllable inputs" that feed them. This includes your own website, company material, Wikipedia data, and review sites (including employee reviews). Ensuring these bases are telling the intended story is the absolute best starting point for managing AI "sentiment."

Q: How does AI prioritise its sources and how does this shape earned media strategy?

There is no "PageRank" to reverse-engineer here. Models are shaped by what was prominent and widely cited in their training data. Practically, this means a shift from volume to authority. A hundred pieces of low-quality coverage do less work than ten pieces in genuinely credible outlets (major mastheads, industry publications, or your own well-structured site). The question for the modern communicator isn't "did we get coverage?", it's "does the coverage that exists, taken as a whole, tell a coherent and credible story?" AI reads the whole picture, not just the highlights reel.

Q: Now that OpenAI is opening up advertising, how much will it cost for a sentiment boost?

Honestly? We don’t know yet. The commercial layer of AI is being figured out in real time. The moment someone wonders if they are getting the "best" answer or a "sponsored" one, trust erodes. However, we still click Google ads, so it will likely happen. What's important is that organisations that "earned" their reputation through authoritative presence before the ad market caught up will be in a much stronger position than those trying to buy a shortcut later.

The path forward for the modern communicator

The insights from our panellists make one thing clear: AI is no longer a tool of the future; it is a stakeholder of the present. To lead with credibility in this new era, communicators must pivot from chasing volume to building authority. Whether it is through adopting a rigorous ethical framework, optimising content for AI readability, or maintaining a "rhythm of interrogation" with the tools we use, the goal remains the same: ensuring our brand narratives are coherent, credible, and human-led.

The tools have finally caught up to the ambitions of our industry. Now, it is up to us to provide the architect's blueprint for how they are used.


Interested in viewing the whole recording? Watch our webinar here.

Alternatively, contact our team to learn more insights into meaningful measurement, KPIs and communicating using the right dataset.

Announcing Lumina: The purpose-built AI suite for PR, Comms, and Public Affairs

An intelligent suite of AI tools trained on the language, workflows, and realities of modern public relations and communications.

The media landscape is accelerating. In an era where influence is ephemeral and every angle demands instant comprehension, PR and communications professionals require more than generic technology—they need intelligence engineered for their specific challenges.

Isentia is proud to introduce Lumina, a groundbreaking suite of intelligent AI tools. Lumina has been trained from the ground up on the complex workflows and realities of modern communications and public affairs. It is explicitly designed to shift professionals from passive media monitoring back into the role of strategic leaders and pacesetters. 

“The PR, Comms and Public Affairs sectors have been experimenting with AI, but most tools have not been built with their real challenges in mind,” said Joanna Arnold, CEO of Pulsar Group.

“Lumina is different; it is the first intelligence suite designed around how narratives actually form today, combining human credibility signals with machine-level analysis. It helps teams understand how stories evolve, filter out noise and respond with context and confidence to crises and opportunities.”

Setting a new standard for PR intelligence

Lumina is centered on empowering, not replacing, the human element of communications strategy. This suite is purpose-built to help PR, Comms, and Public Affairs professionals significantly improve productivity, enhance message clarity, and facilitate early risk detection.

Lumina enables communicators to:

  • Understand & Interpret: Move beyond basic alerts to strategically map the trajectory and spread of narrative evolution.
  • Focus & Personalise: Achieve the clarity necessary to execute strategic action before critical moments pass.
  • Execute & Monitor: Rapidly deploy strategy firmly rooted in real-time, actionable insight.

Get a demo today: Stories & Perspectives module

We are launching the Lumina suite by making our first module immediately available: Stories & Perspectives.

In the current fragmented, multi-channel media environment, communications professionals need to be able to instantly perceive not just how a story is growing, but also how it is being perceived across different stakeholder groups.

Stories & Perspectives organizes raw media mentions into clustered, cohesive Stories, and the Perspectives that exist within each, reflecting distinct media, audience, and public affairs angles. This unique functionality allows users to:

  • Rise above the noise: Instantly identify which high-level topics are gaining momentum or fading from attention.
  • Get to the detail, fast: Uncover the influential voices, niche communities, and specific channels actively shaping the narrative.
  • Catch the pivot point: Precisely identify the moment a story shifts—from a strategic opportunity to a reputation risk—or when a new key opinion former begins guiding the conversation.
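
To illustrate the idea of clustering raw mentions into cohesive stories, here is a toy word-overlap (Jaccard similarity) clusterer. This is a deliberate simplification for illustration, not how Lumina itself works; the mentions and the 0.3 threshold are invented.

```python
# Toy illustration: cluster raw mentions into "stories" by word overlap.
def jaccard(a: set, b: set) -> float:
    """Jaccard similarity of two word sets."""
    return len(a & b) / len(a | b)

def cluster(mentions, threshold=0.3):
    stories = []  # each story: its mentions plus a running word set
    for text in mentions:
        words = set(text.lower().split())
        for story in stories:
            if jaccard(words, story["words"]) >= threshold:
                story["mentions"].append(text)
                story["words"] |= words
                break
        else:  # no story was similar enough: start a new one
            stories.append({"mentions": [text], "words": set(words)})
    return [s["mentions"] for s in stories]

mentions = [
    "Regulator opens consultation on AI disclosure rules",
    "AI disclosure rules: regulator launches consultation",
    "Local team wins community award",
]
for story in cluster(mentions):
    print(story)
```

Even this crude version shows the principle: once mentions are grouped, story-level questions (is it growing? who is driving it?) become answerable.
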

"Media isn’t a stream of mentions," said Kyle Lindsay, Head of Product at Pulsar Group, "but a living system of stories shaped by competing perspectives. When you can see those structures clearly, you gain the ability to understand issues as they form, anticipate how they’ll evolve, and act with precision. That’s what we mean when we talk about AI built for communicators, and that's what an off-the-shelf LLM can't give you."

The Lumina Roadmap: AI tools for the future of comms

The launch of Stories & Perspectives is the first release of many. Over the upcoming months, we will systematically roll out the full Lumina roadmap, introducing a comprehensive set of AI tools engineered to handle every phase of the communications lifecycle.

The full Lumina suite will soon incorporate:

  • Curated media summaries: AI-driven daily summaries customized specifically to the priorities of senior leadership, highlighting only the most relevant stories.
  • Reputation analysis: Advanced measurement tracking how critical themes like ethics, innovation, and leadership are statistically shaping corporate perception.
  • Press release & media relations assistant: Tools designed to accelerate content creation and craft hyper-focused, personalized pitches that reach the precise contacts faster.
  • Predictive intelligence layer: Technology engineered to track and anticipate story momentum and strategic change before the window of opportunity closes.
  • Intelligent agents: Background agents continuously scanning all media channels for emerging key spokespeople and previously undetected reputation risks.
  • Enhanced audio, broadcast & crisis detection: Complete, real-time oversight of all channels—including audio and broadcast—enabling rapid context building and optimal crisis response delivery.


Want to harness the power of Lumina AI for your PR, Comms, or Public Affairs team?

Complete the form below to register your interest.
