In the last few years, we have seen an explosion of content on the internet, which has led to the ever-growing menace of fake news. From planting political agendas to causing panic during the COVID-19 pandemic, there’s so much misinformation floating on the web that it is sometimes difficult for people to differentiate between ‘true’ and ‘false’.
Such stories are intentionally created with clickbait headlines so that people believe what's written and share it with their networks, forming a web of misinformation. This becomes even more dangerous in situations like the COVID-19 pandemic, where every piece of information is critical.
Today, fake news is largely propagated on popular social media platforms like Facebook and Twitter, and messaging platforms like WhatsApp, where billions of people are active every day. Recently, Facebook released a report stating that it uses a combination of artificial intelligence and human fact-checkers to enforce its community standards. The findings cover the last three to six months, and the report places a significant focus on AI rather than relying entirely on human moderators.
As a result, in April 2020 alone, Facebook put labels on about 50 million posts related to COVID-19. Moreover, since March, the platform has also removed more than 2.5 million pieces of content related to the sale of masks, hand sanitisers, surface disinfecting wipes and test kits.
But is this enough to combat the parallel pandemic of fake news?
Earlier, the world relied on the word of journalists, who would report stories from the ground before publishing them. While this practice still exists, it is now accompanied by an overload of information. In the race of 24x7 news, there is an urgency to put out information and populate websites without fact-checking or verification, which in turn propagates inauthentic content.
There are even several AI tools that can be used to disseminate fake news, one being a text generator built by research firm OpenAI. Much has been said about deepfakes, a term coined by a Reddit user in 2017, which use deep learning to create images of fake events. In September 2019, AI firm Deeptrace found 15,000 deepfake videos online, many of which had been weaponised to incite hate and spread malice.
Unsurprisingly, most of this misinformation is disseminated through social media. Today, there are over 2.4 billion active Facebook users all over the world, while WhatsApp has 1.6 billion users. While efforts are being made to eliminate fake news, there's still a lot to be done.
Recently, Facebook also announced its collaboration with the World Health Organisation to tackle COVID-related misinformation. The platform is offering free advertising to the organisation, so that it can spread the right information to its user base.
While AI technology can be used to create misinformation, it can also be used to combat it, by identifying and eliminating fake news. In the last few years, algorithms have successfully identified patterns that distinguish human-written from machine-generated content.
These algorithms are trained on existing articles from known fake news sources, alongside sets of authentic articles and references. Some AI-powered analytical tools also include stance classification, which determines whether a headline matches the article body. This is done by processing the text to analyse the writer's style.
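The core idea behind the headline-versus-body check can be illustrated with a minimal sketch. Real stance classifiers are trained models; the lexical-overlap heuristic, function names and the threshold below are invented purely for illustration.

```python
# Toy stance check: does a headline's vocabulary overlap the article body?
# Production systems use trained classifiers; this only shows the idea.
import math
from collections import Counter

def cosine_similarity(text_a, text_b):
    """Cosine similarity between bag-of-words vectors of two texts."""
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def stance(headline, body, threshold=0.2):
    """Label 'related' if the headline overlaps the body enough (assumed cutoff)."""
    return "related" if cosine_similarity(headline, body) >= threshold else "unrelated"
```

For example, `stance("vaccine trial shows promise", "the vaccine trial shows early promise in volunteers")` returns `"related"`, while a headline sharing no vocabulary with the body comes back `"unrelated"`, flagging it for a closer look.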
As mentioned earlier, social media is where the web of fake news multiplies manifold, which is why Facebook, Google, Twitter and YouTube have come together to limit misinformation about the coronavirus pandemic and push official guidelines on their platforms.
Even before COVID-19, startups like MetaFact have been using AI to detect and monitor fake news in real time. The fact-checking website does not aim to dilute the work done by journalists; rather, it aims to complement them in combating fake news.
“At MetaFact, we always believed that technology would be a new frontier in tackling the issue of information disorder (fake news), but we also very strongly believed that taking human intervention out of the equation will not be practical and a sustainable model. And that’s one of the reasons why we always emphasised on the fact that at Metafact we believe that the AI tool will support the work a journalist does and not replace them,” shared Sagar Kaul, Founder & CEO of MetaFact.
MetaFact also aims to build a trust layer over the internet by harnessing the power of AI. As a first step, the startup is introducing AI into newsrooms to help journalists validate news and produce enriched reporting, bringing down the cost, time and effort involved, with less scope for error.
“Discovering content with intelligent angles/insights will help media companies who are working to increase their subscription revenues through such enriched reporting. We believe that fake news is a byproduct of easy access to social media platforms that enables common users to publish content without the need of verification.”
“This puts the current media ecosystem at a loss due to non-availability of tools that can detect, monitor and investigate such claims in real-time which will help in reducing the time and cost it otherwise would take for such fact-checking. We are creating tools and distribution platforms that will empower journalists and media houses in their fight against the ever-rising issue of fake news by putting the privacy, data interoperability & cognitive computing prowess right in their control,” added Sagar.
Their tool also uses Natural Language Processing to understand the context of news articles, blog posts and social media posts, and then performs cognitive operations, including bucketing, indexing and trust scoring, to give intelligent access to the data. One such functionality filters "claim"-type sentences out of a sea of web content that mixes interrogative, declarative and other sentence structures, which often act as noise for the end users: investigative journalists looking for claims to debunk or investigate at scale.
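A toy version of that filtering step might look like the sketch below. MetaFact's actual pipeline is proprietary; the cue-word list and the digit heuristic here are assumptions made up for illustration, and a real system would use a trained sentence classifier instead.

```python
# Toy claim filter: keep declarative sentences containing a reporting cue
# or a number; drop questions and exclamations, which are noise here.
# The cue list is invented for illustration, not MetaFact's actual logic.
import re

CLAIM_CUES = ("claims", "says", "announced", "reported", "cures")

def extract_claims(text):
    """Return declarative, claim-like sentences from a block of text."""
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    claims = []
    for s in sentences:
        if s.endswith("?") or s.endswith("!"):
            continue  # interrogative/exclamatory sentences are skipped
        if any(cue in s.lower() for cue in CLAIM_CUES) or re.search(r'\d', s):
            claims.append(s)
    return claims
```

Running it on "Is garlic a cure? A viral post claims garlic cures the virus. Stay safe!" keeps only the middle sentence, the checkable claim.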
“From the filtered out claims with respective trust score, social virality index etc. is provided to journalists to start working on debunking claims with lower trust score. Concept highlights, semantic analysis and extractive summarisation help journalists deep dive their search as to whether they are looking for a word which is a person, organisation, location, drug or crime entity type. This heavily decreases ambiguity and redundancy in words that have the same syntax but carry completely different context. The output of the aforementioned search is generated in an enriched E-R visual graph which is easily exportable and covers angles that one might miss in a pure text-based interface. Exportability & compatibility of all graphs generated from the tool have been given utmost importance and has been set to a notably portable .svg type. Moving forward, abstractive summary generation would be possible right from the tool which will further help reduce the noise that a journalist has to go through to find insights and varied angles for generating a story,” explained Sagar.
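The workflow Sagar describes, surfacing low-trust, high-virality claims for journalists to debunk first, amounts to a simple ranking. The field names and the exact ordering rule below are assumptions inferred from his description, not MetaFact's actual scoring.

```python
# Hedged sketch of claim prioritisation: low trust score first, and among
# equally untrustworthy claims, the more viral one first. Field names are
# assumed for illustration; MetaFact's real schema is not public.

def prioritise(claims):
    """Sort claims so low-trust, high-virality ones come first."""
    return sorted(claims, key=lambda c: (c["trust_score"], -c["virality"]))

queue = prioritise([
    {"claim": "a", "trust_score": 0.9, "virality": 10},
    {"claim": "b", "trust_score": 0.1, "virality": 5},
    {"claim": "c", "trust_score": 0.1, "virality": 50},
])
# "c" comes first: as untrustworthy as "b", but far more viral.
```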
There is increasing awareness around fake news, but since AI is a double-edged sword, it needs to be used in the right manner. With the proliferation of more sophisticated tools, the menace may eventually be brought under control; after all, there is light at the end of the tunnel.
Image by geralt via Pixabay