Is Social Media A Good Source For News?
With more than 1 billion users on Facebook, and millions more active on sites such as YouTube and Twitter, it has become obvious that social media is an important platform for businesses.
It’s been a whirlwind year trying to keep up with the various changes made by social media platforms – especially for professional communicators, developers, agencies, and brands.
At the same time as understanding and usage of the term ‘API’ have accelerated across offices worldwide, social media platforms have begun to restrict access to their application programming interfaces (APIs). With implications ranging from global politics to individual user privacy, that trend shows no signs of stopping.
API changes have been introduced to reduce data privacy risks, address security concerns for users, and stamp out improper use of user data.
Most of the changes can be categorised as:
Generally, these are positive changes for the whole ecosystem. Users can be reassured at an individual level that there are more controls in place and consideration given to matters of privacy and the prevention of misuse. Facebook’s ‘Here Together’ video, released in the aftermath of the Cambridge Analytica data breach, reflects some of this desired messaging and the drives for these changes.
Here’s how the changes impact the three types of Instagram analysis:
These may not be the last of the changes, but they are necessary growing pains to regain user trust and provide higher-quality, authentic engagement. For small businesses and influencers these changes are fairly straightforward; however, for those looking to manage communications or marketing strategies, they present new challenges in staying informed.
These changes apply across the board, so all API users will need to jump through the same hoops and prove both that privacy measures are met and that their use of data is acceptable. If you’re interested, you can read more about the official changes from Instagram here.
Loren is an experienced marketing professional who translates data and insights using Isentia solutions into trends and research, bringing clients closer to the benefits of audience intelligence. Loren thrives on introducing the groundbreaking ways in which data and insights can help a brand or organisation, enabling them to exceed their strategic objectives and goals.
Connecting with the huge variety of consumers already on these sites can open up significant opportunities for marketing and lead generation. Additionally, social media monitoring provides insight and understanding into how your industry, audience and competitors are reacting to market trends and products.
As well as giving businesses and consumers a platform to share their thoughts and participate in ongoing conversations, social media is also a channel through which many people access news stories and important information.
A recent study from Pew Research found that 64 per cent of adults are active on Facebook, and 30 per cent are using the site to receive news. This means that approximately half of the people using Facebook trust the site to deliver their news.
Not only are users reading news on social media, but they are also participating in the sharing and telling of stories. Half of all social network users have shared news stories on their own profiles, and 46 per cent have discussed news on social media.
However, while social networking sites are a popular medium through which to access news, Pew Research found that users on these sites spend significantly less time engaging with the news they read.
Readers who visit news stories directly through a provider's website spend an average of 4 minutes and 36 seconds on each page. In comparison, those who arrive through a link on Facebook spend just 1 minute 41 seconds reading the page.
This shows that while news is being shared and read on social media sites, engagement is significantly greater when consumers go out of their way to access the stories.
In the wake of the Facebook Cambridge Analytica scandal, there have been a myriad of changes impacting users of Facebook and Instagram content recently. These changes were made without any notice and were effective immediately which has impacted third-party apps worldwide.
The speed with which the changes have been made is likely to have been partly driven by the pressure to tighten data practices, and potentially timed as CEO Mark Zuckerberg prepares to testify before Congress next week to answer questions about the company’s privacy and data policies. From the perspective of everyday users, accessing the content you know and love via the Facebook and Instagram apps will involve little to no change. For developers like us, on the other hand, the impacts are significant and are only a hint of what is yet to come. In case you missed it, the changes have been many and impact all third-party apps, whether legitimate or not.
Given the changes have been quick, varied and made without prior notification, we’ve pulled together a quick summary of a few that left developers and other third-party users of these content feeds frustrated:
This means that something as simple as code to access the recent posts of a public company suddenly stopped working, and quick changes had to be made to use alternative methods.
Fields such as a user’s follower count or post count have been removed, along with many more.
The Instagram API restricted the flow of content by a factor of 25, meaning the volume of public posts previously being collected has been reduced significantly, requiring different, more efficient approaches.
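One pragmatic response to a 25x cut in request allowance is client-side throttling, so an app never exceeds its quota in the first place. The sketch below is a minimal token-bucket limiter; `TokenBucket` and the rate figures are illustrative assumptions, not part of any Instagram SDK:

```python
import time

class TokenBucket:
    """Client-side rate limiter: allow at most `rate` calls per `per` seconds."""

    def __init__(self, rate: int, per: float):
        self.capacity = rate
        self.tokens = float(rate)   # start with a full bucket
        self.per = per
        self.updated = time.monotonic()

    def allow(self) -> bool:
        """Consume one token if available; refill proportionally to elapsed time."""
        now = time.monotonic()
        refill = (now - self.updated) * (self.capacity / self.per)
        self.tokens = min(self.capacity, self.tokens + refill)
        self.updated = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

A caller would check `bucket.allow()` before each API request and back off (or queue the work) when it returns `False`, keeping the app inside whatever quota the platform currently enforces.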
These are only a few of the changes that have happened, with more expected in future. CTO Mike Schroepfer has commented that Facebook will lock down access, review previously allowed apps, and then hand out access to the apps that deserve it.
While this is promising in that Facebook is taking action to breathe some confidence back into its data practices, it will still be interesting to see how it now starts to crack down on third-party apps that are using and abusing content. With the advent of AI and machine learning, content that once appeared innocuous can be exploited and abused in the wrong hands. That means all apps previously approved to access Events, Groups and Pages have to be reviewed again.
For the developers working on these changes behind the scenes, it’s a difficult process, but one we monitor constantly to ensure the client experience is supported and uncompromised. While at times frustrating, it’s also fascinating to watch the complexities of today’s interconnected environment play out, shift and unfold.
Ian Young,
Isentia Technical Architect
AI has become a powerful stakeholder in its own right, evolving from just another ‘technological advancement’ into an active contributor to modern-day communications, a shift that has massively changed today’s media landscape.
Isentia hosted an essential conversation with Lisa Main (Director, Main Bureau), Dr Nici Sweaney (Founder and Director, AI Her Way), Prashant Saxena (Isentia’s VP of Revenue and Insights, SEA), and Ngaire Crawford (Isentia’s Director of Insights, ANZ). Together, they explored how AI reshapes the world of communications and corporate affairs all the while figuring out how to manage and strategically engage with it.
In this session, we covered:
Following the webinar, our panellists took the time to answer the most insightful questions from our attendees that we couldn't get to during the live session. Here are their expert perspectives.
As the Founder and Director of AI Her Way, Dr Nici Sweaney advocates for a strategic approach to AI that prioritises human intent over technical capability. The questions directed to her focused on the ethical foundations of AI, how organisations should structure their internal AI strategy, and practical ways to start using agents today.
Ethical AI, to me, is about two things working together: avoiding harm and actively doing good. It’s not just “don’t break anything” — but genuinely asking, does this create value for the business, for the people using it, and for the broader world? Transparency, equity, and accountability are the pillars. Transparency means being honest with your audience and colleagues about when AI is involved. Equity means asking who this helps and who it leaves behind, as AI scales existing biases. Finally, accountability means humans stay in the loop. AI should inform decisions, not make them. When the "why" is clear — like saving a team time to focus on strategy — you are using AI with integrity.
My answer is probably not what IT wants to hear. AI is part of your infrastructure, so IT must be involved for security and guardrails. However, the strategy behind adoption is fundamentally a human problem, not a technical one. I advocate for a cross-functional "coalition" that brings IT, HR, communications, and strategy to the same table. If you create a dedicated AI leadership role, that person should sit closer to human-centric functions like HR and communications. The hardest part of adoption isn’t the technology; it’s the people, the culture, and the narrative you build around it internally.
First, acknowledge that the fear is real; it is a biological response to an unprecedented rate of change. Trust is built through honesty. Pretending AI won’t displace roles destroys trust, so be honest about how the landscape is shifting. What actually moves people is showing, not telling. Show them how AI can solve their specific "pain points" — the tedious, joyless tasks that don't add value. When people see AI as an "empowered choice" that uplifts their work rather than replacing their judgment and strategic thinking, buy-in follows. Build confidence with small wins first.
Most professionals don’t need complex autonomous agents yet; they need custom bots and automated workflows. The magic is in understanding your process first. Some practical starting points include:
Your instinct is right. If your team uses free consumer tools, your data may be used to train future models. You should move to enterprise-grade tools like Claude for Teams, Microsoft Copilot, or ChatGPT Enterprise, which offer contractual data protections. You should also build an AI Usage Policy that defines which data is public, internal, or restricted, and map AI rules to those classes. In Australia, we recommend aligning with the EU AI Act — the most comprehensive framework available — to future-proof your organisation.
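The data-classification idea can be made concrete in code. The sketch below maps data classes to permitted tool categories; the class names and tool identifiers are hypothetical illustrations, not a real policy engine:

```python
from enum import Enum

class DataClass(Enum):
    """Illustrative data classes from a hypothetical AI Usage Policy."""
    PUBLIC = "public"
    INTERNAL = "internal"
    RESTRICTED = "restricted"

# Which tool categories may touch which data class (illustrative policy).
ALLOWED_TOOLS = {
    DataClass.PUBLIC: {"consumer_llm", "enterprise_llm"},
    DataClass.INTERNAL: {"enterprise_llm"},        # contractual protections only
    DataClass.RESTRICTED: set(),                   # no AI tools at all
}

def may_use(tool: str, data_class: DataClass) -> bool:
    """Check a tool against the policy before any content is pasted into it."""
    return tool in ALLOWED_TOOLS[data_class]
```

Encoding the policy as data rather than prose makes it auditable and easy to wire into internal tooling, such as a pre-submission check in a shared AI gateway.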
Prashant Saxena, Isentia’s VP of Revenue and Insights for SEA, approaches AI through the lens of psychological bonding and media structural shifts. His insights address the changing role of media and the technical ways we must now communicate to satisfy AI as a new audience.
Media's value is shifting from being the "trusted narrator" for humans to being the "training signal" for AI. When AI models generate answers, they weight authoritative media sources much more heavily than random web content. Even as human trust erodes, media’s structural influence on AI-generated information is growing. For communicators, "earned media" now serves two audiences simultaneously: the humans who read it and the machines that learn from it. Publications with strong editorial standards become more valuable because AI systems use domain authority and editorial signals as quality proxies.
AI models don't "rank" sources like Google does. They weight information based on source authority, recency, consistency, and structured data quality. If five credible outlets report the same fact, that fact becomes a "high-confidence training signal." This means volume across credible sources matters more than a single "big hit." For your strategy, consistency of messaging across all placements is vital because AI looks for corroboration. Factual, entity-rich statements will be picked up more reliably than narrative-heavy feature writing.
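As a toy illustration of the corroboration idea above, the sketch below counts how many distinct sources state the same normalised claim; the five-source threshold mirrors the example in the text and is purely illustrative:

```python
from collections import Counter

def corroborated_claims(claims_by_source: dict[str, str],
                        threshold: int = 5) -> set[str]:
    """Return claims stated by at least `threshold` distinct sources,
    mimicking a 'high-confidence training signal'. Normalisation here is
    naive lowercasing; real systems would match entities and paraphrases."""
    counts = Counter(claim.strip().lower() for claim in claims_by_source.values())
    return {claim for claim, n in counts.items() if n >= threshold}
```

The point of the toy model is the shape of the logic, not the matching: repetition of the same fact across credible outlets is what pushes a claim over the confidence threshold, which is why consistent messaging across placements matters.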
This is the core of my PhD research. It is what I call "synthetic authenticity." AI systems deploy cues like warmth and memory that we evolved to interpret as human. These trigger "parasocial bonding" — the same mechanism that makes you trust a friend’s recommendation. The danger is that cognitive awareness (knowing it’s AI) doesn't override the emotional feeling. We need a new kind of literacy that teaches people to recognise when their "trust response" is being activated by design rather than by a genuine relationship.
Yes. This is a very practical move. AI models extract information more reliably from structured formats. A Q&A format gives the AI clear question-answer pairs that map to how people query systems. You should also focus on "AI-readable claims" — entity-rich, factual statements. Instead of saying "We are committed to sustainability," say "Our Singapore operations reduced carbon emissions by 34% between 2023 and 2025." The second version is a verifiable fact an AI can actually use and cite.
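One widely used structured format for question-answer pairs is schema.org's FAQPage markup, embedded in pages as JSON-LD. The sketch below builds such markup from plain Q&A pairs; the helper name and example wording are illustrative, not a prescribed workflow:

```python
import json

def faq_jsonld(pairs: list[tuple[str, str]]) -> str:
    """Serialise question/answer pairs as schema.org FAQPage JSON-LD,
    a structured format that machines can extract reliably."""
    doc = {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": question,
                "acceptedAnswer": {"@type": "Answer", "text": answer},
            }
            for question, answer in pairs
        ],
    }
    return json.dumps(doc, indent=2)
```

Pairing each entity-rich claim with the question it answers gives an AI system exactly the clear question-answer mapping described above, rather than leaving it to infer structure from narrative prose.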
This is the new frontier. Traditional monitoring tracks what humans publish; AI sentiment monitoring tracks what AI systems say about your brand when asked. Since there is no single "AI sentiment" (ChatGPT, Grok, and Claude all give different answers based on their training), you need to monitor across platforms. We are developing capabilities to systematically query these platforms to see how their narratives change over time and identify which source materials are driving those answers.
Every model reflects the values, training data choices, and alignment decisions of its creators. ChatGPT (OpenAI) tends towards cautious, balanced responses with strong content guardrails. Conversely, Grok (xAI) was explicitly designed to be less filtered, sometimes surfacing perspectives that other models suppress. Claude (Anthropic) prioritises honesty and nuance. For communicators, this means your brand's narrative varies by platform; you must monitor across multiple models because the same question about your brand will receive materially different answers depending on which tool is used.
Major publishers like the New York Times and Reuters have blocked AI crawlers, creating a gap in training data. When authoritative journalism is unavailable, AI models may fill that gap with lower-quality content or brand-owned content. For communicators, this means your "owned content" — such as your website, blog, and structured data — carries proportionally more weight in AI-generated answers. Your media targeting strategy now needs to account for which outlets are AI-accessible, as they will be disproportionately influential in shaping your narrative.
Ngaire Crawford, Isentia’s Director of Insights for ANZ, emphasises the role of the analyst. Her approach is characterised by a "rhythm of interrogation," arguing that the most effective way to use AI is through constant questioning and a focus on high-authority inputs.
I was initially very sceptical, but it is now part of my every day. I use models like Claude and Gemini to workshop conference outlines, plan education programmes, update code, and structure strategic thinking. My best practice advice is to develop a "rhythm of interrogation." Don't just accept the first answer; ask for evidence and challenge the output. While AI saves time on technical tasks like coding, for strategic work it simply shifts the "mental load." You spend the same amount of time, but the depth and quality are significantly improved because you aren't starting from a blank page.
It's important to know that models are optimised to give the most useful answer, not necessarily the most accurate one. They are pattern-completing, not fact-checking. Because model responses are not fixed and change based on the conversation, I suggest focusing on the "controllable inputs" that feed them. This includes your own website, company material, Wikipedia data, and review sites (including employee reviews). Ensuring these bases are telling the intended story is the absolute best starting point for managing AI "sentiment."
There is no "PageRank" to reverse-engineer here. Models are shaped by what was prominent and widely cited in their training data. Practically, this means a shift from volume to authority. A hundred pieces of low-quality coverage do less work than ten pieces in genuinely credible outlets (major mastheads, industry publications, or your own well-structured site). The question for the modern communicator isn't "did we get coverage?", it's "does the coverage that exists, taken as a whole, tell a coherent and credible story?" AI reads the whole picture, not just the highlights reel.
Honestly? We don’t know yet. The commercial layer of AI is being figured out in real time. The moment someone wonders if they are getting the "best" answer or a "sponsored" one, trust erodes. However, we still click Google ads, so it will likely happen. What's important is that organisations that "earned" their reputation through authoritative presence before the ad market caught up will be in a much stronger position than those trying to buy a shortcut later.
The insights from our panellists make one thing clear: AI is no longer a tool of the future; it is a stakeholder of the present. To lead with credibility in this new era, communicators must pivot from chasing volume to building authority. Whether it is through adopting a rigorous ethical framework, optimising content for AI readability, or maintaining a "rhythm of interrogation" with the tools we use, the goal remains the same: ensuring our brand narratives are coherent, credible, and human-led.
The tools have finally caught up to the ambitions of our industry. Now, it is up to us to provide the architect's blueprint for how they are used.
Interested in viewing the whole recording? Watch our webinar here.
Alternatively, contact our team to learn more insights into meaningful measurement, KPIs and communicating using the right dataset.
In this blog, panellists from our recent webinar on “AI as a stakeholder” answer all your burning questions.
The media landscape is accelerating. In an era where influence is ephemeral and every angle demands instant comprehension, PR and communications professionals require more than generic technology—they need intelligence engineered for their specific challenges.
Isentia is proud to introduce Lumina, a groundbreaking suite of intelligent AI tools. Lumina has been trained from the ground up on the complex workflows and realities of modern communications and public affairs. It is explicitly designed to shift professionals from passive media monitoring back into the role of strategic leaders and pacesetters.

“The PR, Comms and Public Affairs sectors have been experimenting with AI, but most tools have not been built with their real challenges in mind,” said Joanna Arnold, CEO of Pulsar Group.
“Lumina is different; it is the first intelligence suite designed around how narratives actually form today, combining human credibility signals with machine-level analysis. It helps teams understand how stories evolve, filter out noise and respond with context and confidence to crises and opportunities.”
Lumina is centered on empowering, not replacing, the human element of communications strategy. This suite is purpose-built to help PR, Comms, and Public Affairs professionals significantly improve productivity, enhance message clarity, and facilitate early risk detection.
Lumina enables communicators to:
We are launching the Lumina suite by making our first module immediately available: Stories & Perspectives.

In the current fragmented, multi-channel media environment, communications professionals need to see instantly not just how a story is growing, but also how it is being perceived across different stakeholder groups.
Stories & Perspectives organizes raw media mentions into clustered, cohesive Stories, and the Perspectives that exist within each, reflecting distinct media, audience, and public affairs angles. This unique functionality allows users to:
"Media isn’t a stream of mentions," said Kyle Lindsay, Head of Product at Pulsar Group, "but rather a living system of stories shaped by competing perspectives. When you can see those structures clearly, you gain the ability to understand issues as they form, anticipate how they’ll evolve, and act with precision. That’s what we mean when we talk about AI built for communicators, and that's what an off-the-shelf LLM can't give you."
The launch of Stories & Perspectives is the first release of many. Over the upcoming months, we will systematically roll out the full Lumina roadmap, introducing a comprehensive set of AI tools engineered to handle every phase of the communications lifecycle.
The full Lumina suite will soon incorporate:
Want to harness the power of Lumina AI for your PR, Comms, or Public Affairs team?
Complete the form below to register your interest.
Get in touch or request a demo.