Disinformation is one of the world's greatest enemies today. It has the power to sway elections, fuel conspiracy theories, and deepen enmity and discord between individuals, communities, and countries.
To combat this digital-age monster, Steven Smith, a member of the MIT Lincoln Laboratory’s Artificial Intelligence Software Architectures and Algorithms Group, has been part of a team on a quest to understand disinformation campaigns through the Reconnaissance of Influence Operations (RIO) program. The team aims to create a system that automatically detects disinformation narratives, as well as the accounts and people responsible for spreading them on social media networks. RIO received an R&D 100 Award in October of last year; early this year, the team presented a paper on RIO in the Proceedings of the National Academy of Sciences.
The seed for RIO was sown in 2014, when Smith was researching how malicious groups could exploit and mislead people on social media. Along with his colleagues, he observed heightened, unusual activity in social media data from accounts that appeared to be promoting pro-Russian narratives.
"We were kind of scratching our heads," Smith says of the data. This inspired the team to apply to the laboratory’s Technology Office for internal funding and to launch a program studying whether similar techniques would be used in the 2017 French elections.
RIO collected real-time social media data in the 30 days leading up to the elections to search for and analyse the extent and effect of disinformation. The data comprised more than 28 million tweets from 1 million Twitter accounts. When the RIO system analysed this information, the team was able to identify 96 percent of the disinformation material.
RIO's uniqueness lies in its ability to combine multiple analytic techniques to present a comprehensive view of where disinformation originates and how it spreads.
"If you are trying to answer the question of who is influential on a social network, traditionally, people look at activity counts," says Edward Kao, who is also part of the research team. On Twitter, for example, analysts would consider the number of tweets and retweets. "What we found is that in many cases this is not sufficient. It doesn’t actually tell you the impact of the accounts on the social network," says Kao.
Kao developed a statistical approach to determine whether a social media account is spreading disinformation and how strongly it can influence the network to amplify that disinformation. The work was part of Kao's PhD research under the laboratory’s Lincoln Scholars program, a tuition fellowship program.
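The article does not describe Kao's statistical model in detail, but the distinction he draws, raw activity counts versus actual network impact, can be illustrated with a toy sketch. The account names, retweet graph, and tweet counts below are all invented for illustration; this is not RIO's method, just a minimal example of why a prolific account can matter less than one whose posts cascade widely.

```python
from collections import deque

# Hypothetical retweet graph: an edge u -> v means "v retweeted u".
# All accounts and numbers are invented for illustration only.
retweets = {
    "loud_account":  ["f1"],        # tweets constantly, little spread
    "quiet_account": ["a", "b"],    # tweets rarely, spreads widely
    "a": ["c", "d"], "b": ["e"],
    "c": [], "d": [], "e": [], "f1": [],
}
tweet_counts = {"loud_account": 500, "quiet_account": 3}

def cascade_reach(graph, source):
    """Count distinct accounts reached by retweet cascades from `source`."""
    seen, queue = {source}, deque([source])
    while queue:
        for nxt in graph.get(queue.popleft(), []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return len(seen) - 1  # exclude the source itself

for acct in ("loud_account", "quiet_account"):
    print(acct, tweet_counts[acct], cascade_reach(retweets, acct))
# loud_account posts 500 tweets but reaches only 1 account;
# quiet_account posts 3 tweets but reaches 5 accounts downstream.
```

Ranking by tweet count puts the "loud" account first; ranking by cascade reach reverses the order, which is the gap between activity and impact that Kao's approach addresses.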
Erika Mackin, another researcher on RIO, applied a novel machine learning approach that helps RIO classify these malicious accounts by observing behavioural data, such as whether and how the accounts interact with foreign media and which languages they use. The approach helped RIO identify hostile accounts that actively spread disinformation during several important campaigns, ranging from the 2017 French presidential elections to the spread of Covid-19 disinformation.
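Mackin's actual model and features are not specified in the article. As a hedged sketch of the general idea, classifying accounts from behavioural signals, the toy example below trains a tiny hand-rolled logistic regression on two invented features (rate of interaction with foreign media, rate of posting in a mismatched language). The data, features, and thresholds are assumptions for illustration only.

```python
import math

# Invented training rows: ((foreign_media_rate, language_mismatch_rate), label)
# where label 1 = hostile account. Purely illustrative, not RIO's real data.
data = [
    ((0.90, 0.80), 1), ((0.80, 0.70), 1), ((0.70, 0.90), 1),
    ((0.10, 0.20), 0), ((0.20, 0.10), 0), ((0.05, 0.15), 0),
]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Fit logistic regression with plain stochastic gradient descent.
w, b, lr = [0.0, 0.0], 0.0, 0.5
for _ in range(2000):
    for (x, y) in data:
        p = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
        err = p - y                      # gradient of log-loss wrt the logit
        w[0] -= lr * err * x[0]
        w[1] -= lr * err * x[1]
        b -= lr * err

def hostility_score(features):
    """Estimated probability that an account with these behaviours is hostile."""
    return sigmoid(w[0] * features[0] + w[1] * features[1] + b)

print(hostility_score((0.85, 0.75)))  # heavy foreign-media interaction: near 1
print(hostility_score((0.10, 0.10)))  # low on both signals: near 0
```

A real system would use far richer features and a production-grade learner, but the core pattern, behavioural features in, hostility probability out, is the same.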
"Another unique aspect of RIO is that it can detect and quantify the impact of accounts operated by both bots and humans, whereas most automated systems in use today detect bots only. RIO also has the ability to help those using the system to forecast how different countermeasures might halt the spread of a particular disinformation campaign," states the press release on the MIT website.
RIO can be used by both government and industry, and its reach extends beyond social media to traditional outlets such as newspapers and television. Joseph Schlessinger, a graduate student at MIT and a military fellow at Lincoln Laboratory, works on the project to understand how information spreads across European media outlets. The team is also working on a program to dive into the cognitive aspects of influence operations and how individual behaviours and attitudes are affected by disinformation.
“Defending against disinformation is not only a matter of national security, but also about protecting democracy,” says Kao.