Bring on the AI overlords: from a content marketer
Artificial intelligence (AI). Just saying the words invokes visions of an apocalyptic future teeming with deadly machines like The Terminator or even software like The Matrix’s Agent Smith. At least that’s the dystopia the scaremongers are peddling. If the latest hype is anything to go by, AI will not only change life on earth as we know it, it will probably take your job too.
As an editor, content marketer and millennial, it appears my head is on the chopping block. Gartner predicts that by 2018, 20 per cent of business content will be authored by machines, and many are speculating that journalists will cease to exist. Add Elon Musk comparing AI to a demon, and even I’m spooked.
But I won’t pack up my desk just yet. Here’s why.
We’re surrounded by AI
Let’s be honest: this is nothing new. Artificial intelligence, machine learning and automation have been around for quite a while, and we’ve all been served Facebook’s AI-driven targeted advertising and been subject to Google AdWords’ AI-powered, automated bidding for years.
Your top picks on Netflix? AI technology fuels its recommendation engine. Apple’s personal assistant, Siri? She’s machine learning to better predict, understand and answer your questions. Google? Depends on AI to rank your search results.
But the machines haven’t taken over yet. Despite it trickling into everyday life, AI is still in its infancy. Instead of conjuring images of alien robots, we should really think of the technology as a baby Bicentennial Man in nappies – waiting for us to teach it.
AI is growing up fast
To be useful for content marketing, AI needs a mammoth amount of fresh, structured data.
Its power lies in its ability to analyse large data sets to reveal patterns and trends. Feed it enough high-quality data and it will be able to predict share prices or a human’s lifespan and, in some cases, even write content.
Natural language generation (NLG) is a type of AI software capable of producing coherent, readable text. NLG robo-journalists are already creating basic sports content and corporate earnings reports. But, as smart as it is, NLG isn’t truly independent – it needs very specific data sets and templates before it can write, and it can’t create anything genuinely new.
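The template-driven approach described above can be sketched in a few lines. This is a deliberately minimal illustration with made-up data, not any vendor's actual system: structured figures go in, a templated sentence comes out, and nothing genuinely new is created.

```python
# A minimal sketch of template-based natural language generation (NLG),
# using a hypothetical earnings record. Robo-journalism for corporate
# earnings works on this fill-in-the-blanks principle: structured data
# in, templated prose out.

EARNINGS_TEMPLATE = (
    "{company} reported revenue of ${revenue}m for {quarter}, "
    "{direction} {change}% on the same quarter last year."
)

def generate_earnings_blurb(record: dict) -> str:
    """Turn one structured earnings record into a readable sentence."""
    direction = "up" if record["change"] >= 0 else "down"
    return EARNINGS_TEMPLATE.format(
        company=record["company"],
        revenue=record["revenue"],
        quarter=record["quarter"],
        direction=direction,
        change=abs(record["change"]),
    )

print(generate_earnings_blurb(
    {"company": "Acme Corp", "revenue": 120, "quarter": "Q3", "change": -4.2}
))
# → Acme Corp reported revenue of $120m for Q3, down 4.2% on the same quarter last year.
```

Note how the software never decides *what* to say: both the template and the data set are supplied by humans, which is exactly the limitation described above.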
Still, that doesn’t mean we can’t use the technology. In the realm of content marketing, AI can gather, sort and make sense of oceans of data – something the industry is swimming in.
AI: Spotting trends, making predictions
Ask any marketer and they’ll tell you they’re ‘data driven’.
Sure, we’re data driven. We look at engagement metrics to tell us what’s working, and change things accordingly to make them work better and inform future decisions. But it’s generally retrospective.
A lot of what we do is still based on instinct. We still speak to real people. We still search online to understand what people are asking. We still study search volumes.
What we need is the ability to predict something before it needs to be changed. This is where the opportunity for AI is in content marketing right now.
Exciting stuff for a content marketer working in a media and data intelligence business. We’re already using our own AI to process seven million news items every day, at a rate of 234 stories per second.
With that much data, our software can make strong recommendations about what type of content we should be creating, and for whom. As it evolves (and learns), it should be able to spot trends and patterns early, informing communications strategies and helping businesses to maximise opportunity and minimise risk.
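The "spot trends and patterns early" idea boils down to detecting when a topic's volume jumps well above its recent baseline. Here is a toy sketch of that, assuming a hypothetical series of daily mention counts for one topic; production systems are far more sophisticated, but the basic shape is the same.

```python
# Flag a topic as "spiking" when today's mention count exceeds the
# trailing mean by more than `threshold` standard deviations.
from statistics import mean, stdev

def is_spiking(daily_counts: list[int], threshold: float = 3.0) -> bool:
    """True if the last count is an outlier versus the preceding history."""
    *history, today = daily_counts
    if len(history) < 2:
        return False  # not enough baseline to judge
    baseline, spread = mean(history), stdev(history)
    return today > baseline + threshold * max(spread, 1.0)

# Steady chatter, then a sudden jump on the last day:
print(is_spiking([12, 15, 11, 14, 13, 90]))  # → True
print(is_spiking([12, 15, 11, 14, 13, 14]))  # → False
```

The `max(spread, 1.0)` floor stops a perfectly flat history from flagging trivial wobbles as trends.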
Humans and AI, living together
AI and predictive analytics will help content marketers understand who they should be talking to and what they should be saying, but it’s up to us to create the content.
AI relies on human data and intelligence to function and learn. At least for now, this is where its limitations lie.
Humans are still needed to create original work that connects with its audience at an emotional level. To completely replace a writer or content marketer, AI would need to have an opinion, think abstractly, be curious and show emotion.
So, while your inbox might be full of propaganda alluding to our impending cyberdoom, we’re not there yet.
However, we shouldn’t be naïve, as the way we work is being transformed. To stay in the game, we should spearhead the change rather than hiding in the corner.
I for one welcome working with our new robot overlords, and I urge you all to join me. As the machine said, “Come with me if you want to live.”
Disclaimer: This article was not written by a robot.
Loren is an experienced marketing professional who uses Isentia solutions to translate data and insights into trends and research, bringing clients closer to the benefits of audience intelligence. Loren thrives on showing the groundbreaking ways data and insights can help a brand or organisation exceed its strategic objectives and goals.
The immediate challenge is not killer robots, it’s job replacement. If individuals are automated out of jobs, the future for society is bleak.
Computers can already take orders, fold clothes and even drive cars, but where to from here?
The robots are coming. Although often spoken of in the future tense, the truth is machine learning is well and truly here. Without realising it, consumers interact with ‘smart’ technology at almost every touchpoint: from robotic vacuums to facial recognition, artificial intelligence (AI) is helping to complete tasks faster, cheaper and, sometimes, more effectively than ever before.
In an economy that’s driven by speed and efficiency, it should come as no surprise that a computer’s ability to communicate at a trillion bits per second is favoured over the human capability of about 10 bits per second.
McKinsey recently reported that 40 per cent of work tasks can be automated using existing technology, prompting everyone from factory workers to lawyers and accountants to consider the threat of being replaced by robots as not just inevitable, but imminent.
As technologists, we are witnessing first-hand how this emerging field is transforming the companies we work for.
In my work at Isentia, we use machine learning to process seven million news items each day. Not long ago this was a task performed solely by humans, flipping through newspapers in the mind-numbing search for stories that might relate to a client.
Today, machines trawl video, audio and digital content across more than 5,500 news sites at a rate of 234 stories per second and present meaningful summaries to clients in real time.
Whether a story breaks on Twitter and then spills across news platforms and onto television and radio, machine learning can track and analyse how a story evolves with 99 per cent accuracy.
While AI is revolutionising the way that we work, the impact is far greater for those in the tech industry.
Our mission is to develop software that can learn complex problems without needing to be taught how. The success of the AI industry ultimately comes down to technology professionals: our ability to automate, and the pace at which we expand the field of machine learning.
With an annual growth rate of 19.7 per cent (the industry is predicted to be worth $15.3 billion by 2019), it’s safe to say our foot is well and truly on the pedal. While this relies greatly on our technical capabilities, it also challenges many of us ethically: what set of values should AI be aligned with?
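For context, a 19.7 per cent annual growth rate compounds quickly. The sketch below shows the arithmetic; the base-year market size used in the example is hypothetical, chosen only to illustrate how such a rate reaches a forecast figure like $15.3 billion in two years.

```python
# Compound a value forward at a fixed annual growth rate.

def project(value: float, annual_rate: float, years: int) -> float:
    """Compound `value` by `annual_rate` (e.g. 0.197 for 19.7%) over `years`."""
    return value * (1 + annual_rate) ** years

# A hypothetical $10.7bn market growing at 19.7% a year for two years:
print(round(project(10.7, 0.197, 2), 1))  # → 15.3
```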
Two of the greatest technologists of our times, Elon Musk and Stephen Hawking, have spoken about both the potential benefit and the harm that an AI arms race could deliver. The eradication of disease is not unfathomable, but nor is a threat to humanity. They hold grave concerns as to whether robots can be protected against misuse or malfunction.
While thought provoking, the immediate challenge is not killer robots, it’s job replacement. Employment may not seem like an ethical problem, but if individuals are automated out of jobs, the future for society is bleak.
While the phrase ‘Thank God it’s Friday’ has forged its way into the 9-to-5 vernacular, for most people, jobs create a huge sense of personal and professional satisfaction… not to mention a means to pay bills.
An apocalypse might be somewhat melodramatic, but I do agree it is important to consider just how closely we should merge biological and digital intelligence.
Computers can already take orders, fold clothes and even drive cars, but where to from here? It’s both exciting and terrifying. The last time we experienced a revolution like this was in the early 1900s when cars, telephones and the airplane all emerged at once.
Contrary to the hype, there lies an enormous opportunity for humans to work with artificial intelligence, not be replaced by it.
Make no mistake: at some level every job can be carried out by a robot. But certain jobs, particularly in technology, still hinge on human decision making, planning and the writing of software.
While computers do a brilliant job of executing well-defined activities, such as telling us the fastest route from home to work, it is safe to say that humans remain essential for setting goals, interpreting results, handling humour and sarcasm, and applying common-sense checks.
The most difficult jobs to automate are those that involve managing and developing people. While most jobs in this industry are safe (for now), we should heed the advice of Musk and Hawking and protect those outside our field by proceeding with caution. How, then, to have humans and robots work together harmoniously without the workforce morphing into cyborgs? The secret is not to sail out farther than we can row back.
As technologists, we also have a duty to empower those around us to learn everything they can about what their job may evolve into in order to become the very best man-machine partner possible. It's the best, and most ethical, way to prepare for the inevitable advent of AI.
First published in CIO New Zealand as “It’s time to slow down the AI arms race”.
Andrea Walsh, CIO
"
["post_title"]=>
string(39) "It's time to slow down the AI arms race"
["post_excerpt"]=>
string(92) "Computers can already take orders, fold clothes and even drive cars, but where to from here?"
["post_status"]=>
string(7) "publish"
["comment_status"]=>
string(4) "open"
["ping_status"]=>
string(4) "open"
["post_password"]=>
string(0) ""
["post_name"]=>
string(38) "its-time-to-slow-down-the-ai-arms-race"
["to_ping"]=>
string(0) ""
["pinged"]=>
string(0) ""
["post_modified"]=>
string(19) "2019-06-26 01:01:55"
["post_modified_gmt"]=>
string(19) "2019-06-26 01:01:55"
["post_content_filtered"]=>
string(0) ""
["post_parent"]=>
int(0)
["guid"]=>
string(43) "https://isentiastaging.wpengine.com/?p=1829"
["menu_order"]=>
int(0)
["post_type"]=>
string(4) "post"
["post_mime_type"]=>
string(0) ""
["comment_count"]=>
string(1) "0"
["filter"]=>
string(3) "raw"
}
Blog
It’s time to slow down the AI arms race
Computers can already take orders, fold clothes and even drive cars, but where to from here?
Answering your questions from the AI as a stakeholder webinar

AI has become a powerful stakeholder in its own right, moving from just another ‘technological advancement’ to an active contributor to modern-day communications, and massively changing the media landscape along the way.
Isentia hosted an essential conversation with Lisa Main (Director, Main Bureau), Dr Nici Sweaney (Founder and Director, AI Her Way), Prashant Saxena (Isentia’s VP of Revenue and Insights, SEA), and Ngaire Crawford (Isentia’s Director of Insights, ANZ). Together, they explored how AI reshapes the world of communications and corporate affairs all the while figuring out how to manage and strategically engage with it.
In this session, we covered:
Understanding AI’s behaviour and influence as a digital stakeholder.
Navigating the unique challenges and opportunities AI presents as a new "audience."
The long-term impact of AI and LLMs on the industries central to modern communicators.
Following the webinar, our panellists took the time to answer the most insightful questions from our attendees that we couldn't get to during the live session. Here are their expert perspectives.
Ethical governance and human-centric adoption: perspectives from Dr Nici Sweaney
As the Founder and Director of AI Her Way, Dr Nici Sweaney advocates for a strategic approach to AI that prioritises human intent over technical capability. The questions directed to her focused on the ethical foundations of AI, how organisations should structure their internal AI strategy, and practical ways to start using agents today.
Q: Could you please shed a little light on what ethical AI in your language means?
Ethical AI, to me, is about two things working together: avoiding harm and actively doing good. It’s not just “don’t break anything” — but genuinely asking, does this create value for the business, for the people using it, and for the broader world? Transparency, equity, and accountability are the pillars. Transparency means being honest with your audience and colleagues about when AI is involved. Equity means asking who this helps and who it leaves behind, as AI scales existing biases. Finally, accountability means humans stay in the loop. AI should inform decisions, not make them. When the "why" is clear — like saving a team time to focus on strategy — you are using AI with integrity.
Q: Should AI adoption be owned by IT or Internal Communications? I see staff intranets being overtaken by AI and this has implications for how employees are communicated with.
My answer is probably not what IT wants to hear. AI is part of your infrastructure, so IT must be involved for security and guardrails. However, the strategy behind adoption is fundamentally a human problem, not a technical one. I advocate for a cross-functional "coalition" that brings IT, HR, communications, and strategy to the same table. If you create a dedicated AI leadership role, that person should sit closer to human-centric functions like HR and communications. The hardest part of adoption isn’t the technology; it’s the people, the culture, and the narrative you build around it internally.
Q: What are the most effective ways to address colleagues' concerns about using AI agents in the workplace — particularly around trust, accuracy, and job security?
First, acknowledge that the fear is real; it is a biological response to an unprecedented rate of change. Trust is built through honesty. Pretending AI won’t displace roles destroys trust, so be honest about how the landscape is shifting. What actually moves people is showing, not telling. Show them how AI can solve their specific "pain points" — the tedious, joyless tasks that don't add value. When people see AI as an "empowered choice" that uplifts their work rather than replacing their judgment and strategic thinking, buy-in follows. Build confidence with small wins first.
Q: What are some simple AI agents that you would recommend communications professionals experiment with setting up?
Most professionals don’t need complex autonomous agents yet; they need custom bots and automated workflows. The magic is in understanding your process first. Some practical starting points include:
Daily Briefings: A task that pulls from your calendar, email, and news to deliver a summary each morning.
Meeting Prep: Automated notes that pull context and past correspondence before a meeting, and transcription tools that turn recordings into action items afterwards.
Content Repurposing: A custom bot trained on your "voice" that can turn one talk or newsletter into 15+ social media assets and blog snippets.
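The "daily briefing" idea above is essentially a merge-and-format job. Here is a minimal sketch with stubbed feeds; a real agent would pull from calendar, email and news APIs, and the feed names and items below are purely illustrative.

```python
# Compose one dated morning summary from several named feeds.
from datetime import date

def compose_briefing(feeds: dict[str, list[str]]) -> str:
    """Merge named feeds into a single dated, sectioned summary."""
    lines = [f"Briefing for {date.today():%A %d %B}"]
    for source, items in feeds.items():
        lines.append(f"\n{source}:")
        lines.extend(f"  - {item}" for item in items)
    return "\n".join(lines)

print(compose_briefing({
    "Calendar": ["09:30 editorial stand-up", "14:00 client review"],
    "News": ["Competitor launches rebrand", "New privacy bill tabled"],
}))
```

The value is not in the formatting, of course, but in wiring the feeds in; understanding your own process first, as suggested above, tells you which feeds matter.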
Q: Our team members are using AI daily, but I know this is not safe as data is transferred back and forth. Should we create rules and ask people to sign IP protection?
Your instinct is right. If your team uses free consumer tools, your data may be used to train future models. You should move to enterprise-grade tools like Claude for Teams, Microsoft Copilot, or ChatGPT Enterprise, which offer contractual data protections. You should also build an AI Usage Policy that defines which data is public, internal, or restricted, and map AI rules to those classes. In Australia, we recommend aligning with the EU AI Act — the most comprehensive framework available — to future-proof your organisation.
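One way to make that policy concrete is a simple classification-to-tool map that every request is checked against. The tool names and data classes below are illustrative, not a real policy.

```python
# Map data classifications to the AI tools allowed to receive them,
# then check each proposed use against the map.

POLICY = {
    "public":     {"consumer_chatbot", "enterprise_llm"},
    "internal":   {"enterprise_llm"},
    "restricted": set(),  # never leaves the organisation
}

def may_send(classification: str, tool: str) -> bool:
    """True if policy allows data of `classification` into `tool`."""
    return tool in POLICY.get(classification, set())

print(may_send("internal", "consumer_chatbot"))  # → False
print(may_send("internal", "enterprise_llm"))    # → True
```

Unknown classifications deny by default, which is usually the safer posture for an AI usage policy.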
Synthetic authenticity and the new media ecosystem: Perspectives from Prashant Saxena
Prashant Saxena, Isentia’s VP of Revenue and Insights for SEA, approaches AI through the lens of psychological bonding and media structural shifts. His insights address the changing role of media and the technical ways we must now communicate to satisfy AI as a new audience.
Q: Given that trust in media is dropping and media themselves are using AI more, what is the role or value media can have now?
Media's value is shifting from being the "trusted narrator" for humans to being the "training signal" for AI. When AI models generate answers, they weight authoritative media sources much more heavily than random web content. Even as human trust erodes, media’s structural influence on AI-generated information is growing. For communicators, "earned media" now serves two audiences simultaneously: the humans who read it and the machines that learn from it. Publications with strong editorial standards become more valuable because AI systems use domain authority and editorial signals as quality proxies.
Q: How does AI rank or prioritise its sources and how do you see this shaping the earned media strategy for brands?
AI models don't "rank" sources like Google does. They weight information based on source authority, recency, consistency, and structured data quality. If five credible outlets report the same fact, that fact becomes a "high-confidence training signal." This means volume across credible sources matters more than a single "big hit." For your strategy, consistency of messaging across all placements is vital because AI looks for corroboration. Factual, entity-rich statements will be picked up more reliably than narrative-heavy feature writing.
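The corroboration point above can be illustrated with a toy scoring function: a claim's confidence grows with the number of distinct credible outlet types repeating it. The weights here are invented for illustration and are not how any real model scores sources.

```python
# Toy corroboration score: sum credibility weights over the distinct
# outlet types reporting the same claim.

CREDIBILITY = {"major_masthead": 1.0, "trade_press": 0.7, "blog": 0.2}

def corroboration_score(reports: list[str]) -> float:
    """Weight a claim by the distinct outlet types that carry it."""
    return sum(CREDIBILITY.get(outlet, 0.0) for outlet in set(reports))

# The same fact in a masthead plus a trade title outweighs five blog posts:
print(corroboration_score(["major_masthead", "trade_press"]))  # → 1.7
print(corroboration_score(["blog"] * 5))                       # → 0.2
```

Note the `set()`: repeating the claim on the same low-authority channel adds nothing, which mirrors the "volume across credible sources" point above.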
Q: With the question of trust — where does the psychology come into it when AI uses a cute nickname or 'remembers' your day? Is it harder to remain dispassionate?
This is the core of my PhD research. It is what I call "synthetic authenticity." AI systems deploy cues like warmth and memory that we evolved to interpret as human. These trigger "parasocial bonding" — the same mechanism that makes you trust a friend’s recommendation. The danger is that cognitive awareness (knowing it’s AI) doesn't override the emotional feeling. We need a new kind of literacy that teaches people to recognise when their "trust response" is being activated by design rather than by a genuine relationship.
Q: Should we be changing the format of communications to cater for AI as an audience, such as media releases in Q&A format?
Yes. This is a very practical move. AI models extract information more reliably from structured formats. A Q&A format gives the AI clear question-answer pairs that map to how people query systems. You should also focus on "AI-readable claims" — entity-rich, factual statements. Instead of saying "We are committed to sustainability," say "Our Singapore operations reduced carbon emissions by 34% between 2023 and 2025." The second version is a verifiable fact an AI can actually use and cite.
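For web-published Q&A content, one widely used machine-readable form is schema.org FAQPage JSON-LD. The sketch below generates it from question-answer pairs, reusing the emissions claim from the answer above as the example text.

```python
# Render question-answer pairs as a schema.org FAQPage JSON-LD block,
# giving AI systems clean, extractable question-answer structure.
import json

def faq_jsonld(pairs: list[tuple[str, str]]) -> str:
    """Build a FAQPage JSON-LD document from (question, answer) pairs."""
    doc = {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": q,
                "acceptedAnswer": {"@type": "Answer", "text": a},
            }
            for q, a in pairs
        ],
    }
    return json.dumps(doc, indent=2)

print(faq_jsonld([(
    "Has the company reduced its emissions?",
    "Our Singapore operations reduced carbon emissions by 34% between 2023 and 2025.",
)]))
```

Each answer is itself an "AI-readable claim": a single entity-rich, verifiable statement rather than narrative copy.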
Q: PR professionals traditionally monitor media coverage through agencies like Isentia to gauge sentiment. With AI as a stakeholder, how do we monitor 'its sentiment'?
This is the new frontier. Traditional monitoring tracks what humans publish; AI sentiment monitoring tracks what AI systems say about your brand when asked. Since there is no single "AI sentiment" (ChatGPT, Grok, and Claude all give different answers based on their training), you need to monitor across platforms. We are developing capabilities to systematically query these platforms to see how their narratives change over time and identify which source materials are driving those answers.
Q: Regarding ethics and agendas in AI learning — what are the differences between models like ChatGPT and Grok, and how does this affect our brand narrative?
Every model reflects the values, training data choices, and alignment decisions of its creators. ChatGPT (OpenAI) tends towards cautious, balanced responses with strong content guardrails. Conversely, Grok (xAI) was explicitly designed to be less filtered, sometimes surfacing perspectives that other models suppress. Claude (Anthropic) prioritises honesty and nuance. For communicators, this means your brand's narrative varies by platform; you must monitor across multiple models because the same question about your brand will receive materially different answers depending on which tool is used.
Q: With many major news organisations blocking AI crawlers, how should we navigate content creation to ensure we still influence AI-generated answers?
Major publishers like the New York Times and Reuters have blocked AI crawlers, creating a gap in training data. When authoritative journalism is unavailable, AI models may fill that gap with lower-quality content or brand-owned content. For communicators, this means your "owned content" — such as your website, blog, and structured data — carries proportionally more weight in AI-generated answers. Your media targeting strategy now needs to account for which outlets are AI-accessible, as they will be disproportionately influential in shaping your narrative.
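Whether an outlet is AI-accessible is often stated in its robots.txt, which publishers use to block crawlers such as GPTBot. Python's standard library can check this; the robots.txt content below is a stubbed example rather than a live fetch from any real site.

```python
# Check whether a site's robots.txt permits an AI crawler to fetch a page.
from urllib.robotparser import RobotFileParser

ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

print(parser.can_fetch("GPTBot", "https://example.com/news/story"))       # → False
print(parser.can_fetch("Mozilla/5.0", "https://example.com/news/story"))  # → True
```

Running the same check against a target outlet's real robots.txt is a quick way to audit which placements can still reach AI training pipelines.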
Analytical interrogation and the search for authority: Perspectives from Ngaire Crawford
Ngaire Crawford, Isentia’s Director of Insights for ANZ, emphasises the role of the analyst. Her approach is characterised by a "rhythm of interrogation," arguing that the most effective way to use AI is through constant questioning and a focus on high-authority inputs.
Q: Is AI already part of your daily work or habit? If so, how are you using it and what are your best practices?
I was initially very sceptical, but it is now part of my every day. I use models like Claude and Gemini to workshop conference outlines, plan education programmes, update code, and structure strategic thinking. My best practice advice is to develop a "rhythm of interrogation." Don't just accept the first answer; ask for evidence and challenge the output. While AI saves time on technical tasks like coding, for strategic work it simply shifts the "mental load." You spend the same amount of time, but the depth and quality are significantly improved because you aren't starting from a blank page.
Q: PR professionals traditionally monitor media coverage through agencies like Isentia to gauge what stakeholders think about a brand. How do we monitor 'AI sentiment' and the information that feeds these models?
It's important to know that models are optimised to give the most useful answer, not necessarily the most accurate one. They are pattern-completing, not fact-checking. Because model responses are not fixed and change based on the conversation, I suggest focusing on the "controllable inputs" that feed them. This includes your own website, company material, Wikipedia data, and review sites (including employee reviews). Ensuring these bases are telling the intended story is the absolute best starting point for managing AI "sentiment."
Q: How does AI prioritise its sources and how does this shape earned media strategy?
There is no "PageRank" to reverse-engineer here. Models are shaped by what was prominent and widely cited in their training data. Practically, this means a shift from volume to authority. A hundred pieces of low-quality coverage do less work than ten pieces in genuinely credible outlets (major mastheads, industry publications, or your own well-structured site). The question for the modern communicator isn't "did we get coverage?", it's "does the coverage that exists, taken as a whole, tell a coherent and credible story?" AI reads the whole picture, not just the highlights reel.
Q: Now that OpenAI is opening up advertising, how much will it cost for a sentiment boost?
Honestly? We don’t know yet. The commercial layer of AI is being figured out in real time. The moment someone wonders if they are getting the "best" answer or a "sponsored" one, trust erodes. However, we still click Google ads, so it will likely happen. What's important is that organisations that "earned" their reputation through authoritative presence before the ad market caught up will be in a much stronger position than those trying to buy a shortcut later.
The path forward for the modern communicator
The insights from our panellists make one thing clear: AI is no longer a tool of the future; it is a stakeholder of the present. To lead with credibility in this new era, communicators must pivot from chasing volume to building authority. Whether it is through adopting a rigorous ethical framework, optimising content for AI readability, or maintaining a "rhythm of interrogation" with the tools we use, the goal remains the same: ensuring our brand narratives are coherent, credible, and human-led.
The tools have finally caught up to the ambitions of our industry. Now, it is up to us to provide the architect's blueprint for how they are used.
Interested in viewing the whole recording? Watch our webinar here.
Alternatively, contact our team to learn more insights into meaningful measurement, KPIs and communicating using the right dataset.
"
["post_title"]=>
string(61) "Answering your questions from the AI as a stakeholder webinar"
["post_excerpt"]=>
string(118) "In this blog, panelists from our recent webinar on "AI as a stakeholder" get to answering all your burning questions. "
["post_status"]=>
string(7) "publish"
["comment_status"]=>
string(4) "open"
["ping_status"]=>
string(4) "open"
["post_password"]=>
string(0) ""
["post_name"]=>
string(61) "answering-your-questions-from-the-ai-as-a-stakeholder-webinar"
["to_ping"]=>
string(0) ""
["pinged"]=>
string(0) ""
["post_modified"]=>
string(19) "2026-03-24 06:10:12"
["post_modified_gmt"]=>
string(19) "2026-03-24 06:10:12"
["post_content_filtered"]=>
string(0) ""
["post_parent"]=>
int(0)
["guid"]=>
string(32) "https://www.isentia.com/?p=45283"
["menu_order"]=>
int(0)
["post_type"]=>
string(4) "post"
["post_mime_type"]=>
string(0) ""
["comment_count"]=>
string(1) "0"
["filter"]=>
string(3) "raw"
}
Blog
Answering your questions from the AI as a stakeholder webinar
In this blog, panelists from our recent webinar on “AI as a stakeholder” get to answering all your burning questions.
Announcing Lumina: The purpose-built AI suite for PR, Comms, and Public Affairs

The media landscape is accelerating. In an era where influence is ephemeral and every angle demands instant comprehension, PR and communications professionals need more than generic technology: they need intelligence engineered for their specific challenges.
Isentia is proud to introduce Lumina, a groundbreaking suite of intelligent AI tools. Lumina has been trained from the ground up on the complex workflows and realities of modern communications and public affairs. It is explicitly designed to shift professionals from passive media monitoring back into the role of strategic leaders and pacesetters.
“The PR, Comms and Public Affairs sectors have been experimenting with AI, but most tools have not been built with their real challenges in mind,” said Joanna Arnold, CEO of Pulsar Group.
“Lumina is different; it is the first intelligence suite designed around how narratives actually form today, combining human credibility signals with machine-level analysis. It helps teams understand how stories evolve, filter out noise and respond with context and confidence to crises and opportunities.”
Setting a new standard for PR intelligence
Lumina is centered on empowering, not replacing, the human element of communications strategy. This suite is purpose-built to help PR, Comms, and Public Affairs professionals significantly improve productivity, enhance message clarity, and facilitate early risk detection.
Lumina enables communicators to:
Understand & Interpret: Move beyond basic alerts to strategically map the trajectory and spread of narrative evolution.
Focus & Personalise: Achieve the clarity necessary to execute strategic action before critical moments pass.
We are launching the Lumina suite by making our first module immediately available: Stories & Perspectives.
In the current fragmented, multi-channel media environment, communications professionals need to see instantly not just how a story is growing, but also how it is being perceived across different stakeholder groups.
Stories & Perspectives organizes raw media mentions into clustered, cohesive Stories, and the Perspectives that exist within each, reflecting distinct media, audience, and public affairs angles. This unique functionality allows users to:
Rise above the noise: Instantly identify which high-level topics are gaining momentum or fading from attention.
Get to the detail, fast: Uncover the influential voices, niche communities, and specific channels actively shaping the narrative.
Catch the pivot point: Precisely identify the moment a story shifts—from a strategic opportunity to a reputation risk—or when a new key opinion former begins guiding the conversation.
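The clustering step that turns raw mentions into Stories can be illustrated with a toy greedy grouper based on word overlap. Lumina's actual models are far richer than this; the sketch only shows the basic shape of mention-to-story grouping, using invented mention text.

```python
# Greedily group mentions into "stories" by Jaccard word-overlap.

def jaccard(a: set[str], b: set[str]) -> float:
    """Similarity of two word sets: intersection over union."""
    return len(a & b) / len(a | b)

def cluster_mentions(mentions: list[str], threshold: float = 0.3) -> list[list[str]]:
    """Attach each mention to the first story it resembles, else start a new one."""
    stories: list[list[str]] = []
    for text in mentions:
        words = set(text.lower().split())
        for story in stories:
            if jaccard(words, set(story[0].lower().split())) >= threshold:
                story.append(text)
                break
        else:
            stories.append([text])
    return stories

mentions = [
    "Regulator opens inquiry into bank fees",
    "Bank fees inquiry opened by regulator",
    "Tech firm unveils new headset",
]
print(len(cluster_mentions(mentions)))  # → 2
```

The two bank-fees mentions share enough vocabulary to land in one story while the headset item starts its own, which is the "stream of mentions into a system of stories" idea in miniature.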
"Media isn’t a stream of mentions," said Kyle Lindsay, Head of Product at Pulsar Group. "But rather a living system of stories shaped by competing perspectives. When you can see those structures clearly, you gain the ability to understand issues as they form, anticipate how they’ll evolve, and act with precision. That’s what we mean when we talk about AI built for communicators, and that's what an off-the-shelf LLM can't give you."
The Lumina Roadmap: AI tools for the future of comms
The launch of Stories & Perspectives is the first release of many. Over the upcoming months, we will systematically roll out the full Lumina roadmap, introducing a comprehensive set of AI tools engineered to handle every phase of the communications lifecycle.
The full Lumina suite will soon incorporate:
Curated media summaries: AI-driven daily summaries customized specifically to the priorities of senior leadership, highlighting only the most relevant stories.
Reputation analysis: Advanced measurement tracking how critical themes like ethics, innovation, and leadership are statistically shaping corporate perception.
Press release & media relations assistant: Tools designed to accelerate content creation and craft hyper-focused, personalized pitches that reach the precise contacts faster.
Predictive intelligence layer: Technology engineered to track and anticipate story momentum and strategic change before the window of opportunity closes.
Intelligent agents: Background agents continuously scanning all media channels for emerging key spokespeople and previously undetected reputation risks.
Enhanced audio, broadcast & crisis detection: Complete, real-time oversight of all channels—including audio and broadcast—enabling rapid context building and optimal crisis response delivery.
Want to harness the power of Lumina AI for your PR, Comms, or Public Affairs team?
Complete the form below to register your interest.
"
["post_title"]=>
string(79) "Announcing Lumina: The purpose-built AI suite for PR, Comms, and Public Affairs"
["post_excerpt"]=>
string(129) "An intelligent suite of AI tools trained on the language, workflows, and realities of modern public relations and communications."
["post_status"]=>
string(7) "publish"
["comment_status"]=>
string(4) "open"
["ping_status"]=>
string(4) "open"
["post_password"]=>
string(0) ""
["post_name"]=>
string(76) "announcing-lumina-the-purpose-built-ai-suite-for-pr-comms-and-public-affairs"
["to_ping"]=>
string(0) ""
["pinged"]=>
string(0) ""
["post_modified"]=>
string(19) "2025-12-09 09:39:52"
["post_modified_gmt"]=>
string(19) "2025-12-09 09:39:52"
["post_content_filtered"]=>
string(0) ""
["post_parent"]=>
int(0)
["guid"]=>
string(32) "https://www.isentia.com/?p=43742"
["menu_order"]=>
int(0)
["post_type"]=>
string(4) "post"
["post_mime_type"]=>
string(0) ""
["comment_count"]=>
string(1) "0"
["filter"]=>
string(3) "raw"
}
Blog
Announcing Lumina: The purpose-built AI suite for PR, Comms, and Public Affairs
An intelligent suite of AI tools trained on the language, workflows, and realities of modern public relations and communications.