Detection of Crowd Manipulation in Social Media
Navy STTR 2019.A - Topic N19A-T024
ONR - Mr. Steve Sullivan - steven.sullivan@navy.mil
Opens: January 8, 2019 - Closes: February 6, 2019 (8:00 PM ET)

N19A-T024

TITLE: Detection of Crowd Manipulation in Social Media


TECHNOLOGY AREA(S): Information Systems

ACQUISITION PROGRAM: Distributed Common Ground/Surface System-Marine Corps (DCGS-MC)

OBJECTIVE: Develop information stream analysis models and analytic tools that detect, characterize, and visualize computational propaganda, including influence campaigns that target the emotions of anger, hate, fear, and disgust. Ensure that the proposed capability can indicate and analyze influence campaigns in progress and evaluate their potential impacts on target audiences.

DESCRIPTION: Operating in the information environment today is highly challenging for Navy, Marine Corps, and other military warfighters. The information environment includes multiple platforms, social communities, and topic areas that are polluted with disinformation and attempts to manipulate crowds, spread rumor, and instigate social hysteria. Polarization of crowds is a significant problem, with nation-state actors conducting malicious campaigns to spread and amplify civil discontent and chaotic social dynamics, usually by manipulating the emotional mood of crowds. Hate, anger, disgust, fear, and social anxiety are heightened using computational propaganda. Current “sentiment” models are poorly suited to measuring emotional content in online media, and such measures are not well synchronized with measurements of manipulation by information actors intent on subverting civil discourse and discrediting the messages of civil authorities.

The current state of the art in botnet detection merely identifies automated features such as identical content, identical targets, coordination of message dispersal, and similar measurable indicators. Hybrid or “cyborg” content distributors, and distributors who use artificial-intelligence-enhanced capabilities (“smart” botnets), make manipulated discourses more difficult to detect. Inflamed crowds can result from a relatively small “signal” of inflammatory texts, pictures, messages, and videos, amplified just enough to “catch hold” in an already unstable environment. This occurred in Sudan, where messages about Benghazi in Libya caused mobs to attack embassies; the British embassy was set on fire only hours after the first messaging began. Current state-of-the-art approaches rely on older algorithms (such as Linguistic Inquiry and Word Count, LIWC) to evaluate messaging; more sophisticated models, such as Ekman’s model of emotions or Russell’s circumplex model with Scherer’s updates [Refs 1, 2], have been used for measuring and evaluating emotions in blog posts. Natural Language Processing (NLP) models have also been used meaningfully in research settings [Ref 4]. “Feeling offended,” a complex emotional state, has also been studied by D’Errico and Poggi [Ref 3].
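As a concrete illustration of the measurable indicators described above (identical targets and coordinated message dispersal), the Python sketch below scores pairs of accounts by shared-URL overlap and near-simultaneous posting. The record fields, the 60-second window, and the equal weighting are assumptions made for illustration; this is a minimal sketch, not a validated detector.

from itertools import combinations

def jaccard(a: set, b: set) -> float:
    # Jaccard similarity of two sets; 0.0 when both are empty.
    return len(a & b) / len(a | b) if (a | b) else 0.0

def coordination_scores(posts_by_account: dict) -> dict:
    # posts_by_account maps an account id to a non-empty list of posts,
    # each of the form {"urls": set_of_str, "ts": unix_seconds_float}.
    scores = {}
    for a, b in combinations(posts_by_account, 2):
        urls_a = set().union(*(p["urls"] for p in posts_by_account[a]))
        urls_b = set().union(*(p["urls"] for p in posts_by_account[b]))
        url_overlap = jaccard(urls_a, urls_b)
        # Fraction of cross-account post pairs published within 60 seconds
        # of each other (the 60-second window is an assumed threshold).
        times_a = [p["ts"] for p in posts_by_account[a]]
        times_b = [p["ts"] for p in posts_by_account[b]]
        close = sum(1 for ta in times_a for tb in times_b if abs(ta - tb) < 60)
        sync = close / (len(times_a) * len(times_b))
        scores[(a, b)] = 0.5 * url_overlap + 0.5 * sync  # assumed equal weights
    return scores

High-scoring pairs would be candidates for the hybrid or “cyborg” distributor classes described above, pending closer analysis.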

Typically, today’s bot-detection methods indicate the presence or absence of bots with some degree of accuracy [Ref 5]. Sentiment analysis capabilities are limited to, at best, a three-state scale (positive, negative, neutral). In crisis situations and emergencies, sentiment analysis is of little value: it cannot tell the operator what kinds of negativity are in play or what types of emotional issues are being expressed. Anger, fear, hate, disgust, and propaganda-fueled discourses require further unpacking. To develop an appropriate understanding of an influence campaign, communicators need to identify the gists, the stories, and ultimately the narratives that are in play, and they need effective models of complex emotions.
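To illustrate the difference between a three-state sentiment scale and the finer-grained unpacking this topic calls for, the minimal Python sketch below tallies discrete emotion categories with a lexicon lookup. The tiny lexicon is a placeholder assumption; a working system would use a curated emotion lexicon or a trained classifier.

from collections import Counter

EMOTION_LEXICON = {  # placeholder entries, for illustration only
    "furious": "anger", "outrage": "anger",
    "terrified": "fear", "panic": "fear",
    "vile": "disgust", "repulsive": "disgust",
    "despise": "hate", "loathe": "hate",
}

def emotion_profile(text: str) -> Counter:
    # Count lexicon hits per emotion category in a lowercased token stream.
    tokens = text.lower().split()
    return Counter(EMOTION_LEXICON[t] for t in tokens if t in EMOTION_LEXICON)

print(emotion_profile("The panic and outrage were repulsive"))
# Counter({'fear': 1, 'anger': 1, 'disgust': 1})

Where a three-state sentiment model would report only “negative,” this kind of profile distinguishes which of the targeted emotions is actually in play.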

The scope of this topic is to develop tools that can detect the existence of an attempt to influence or mislead people through social networks or other information technology means, assess the emotions it is triggering and the level of reaction in real time, and provide this information to multiple network users in a cloud environment.

Technologies proposed under this topic might include models and analytic tools for measuring emotional content and identifying logical fallacies, such as ad hominem arguments, ground shifts, and other rhetorical devices [Ref 6] commonly used in online propaganda techniques. Capabilities to identify propaganda may be separate from, or synchronized with, capabilities to identify, evaluate, and describe emotional content. Methods and techniques for creating baselines of audiences’ emotional responses to civil authority messaging would be helpful. Initial algorithms, models, and tools are expected to use simulated data, though real-world cases can be used in development. Capabilities that include cultural aspects of crowd manipulation in non-English-speaking contexts would be considered particularly responsive to this topic.
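The baselining idea mentioned above could be prototyped along the following lines: score a stream’s emotional intensity over time and flag windows that deviate sharply from the audience’s historical baseline. The window granularity and the three-sigma threshold in this sketch are illustrative assumptions.

import statistics

def flag_anomalies(baseline, current, sigma=3.0):
    # Return indices of current-window scores more than `sigma` standard
    # deviations above the baseline mean (a crude arousal-spike indicator).
    mu = statistics.mean(baseline)
    sd = statistics.stdev(baseline)
    return [i for i, x in enumerate(current) if x > mu + sigma * sd]

# Example: hourly anger-intensity scores; hour 2 of the current day spikes.
baseline_scores = [0.10, 0.12, 0.11, 0.09, 0.13, 0.10, 0.12, 0.11]
print(flag_anomalies(baseline_scores, [0.11, 0.10, 0.45]))  # -> [2]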

PHASE I: Develop prototype algorithms, models, and tools that use Government-supplied synthetic data, supplemented by case studies, to demonstrate a proof of concept that identifies computational propaganda content and the emotional valences of messages on Twitter, including indicators of manipulation and the capability to segment actor communities (i.e., botnet, bot-enhanced, human). Integrate simple models of emotions (such as Ekman’s model) and consider using more sophisticated, finer-grained models (such as Russell’s model with Scherer’s updates). Note: These models are considered illustrative; developers are free to use other models of emotions. Ensure that the prototype successfully identifies sets of messages, gists, and stories; determines their emotional content in a general sense; estimates whether these sets are likely to represent manipulated discourse; and visualizes the discourse by gist (topic) and story (such as URL). Develop a Phase II plan.
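For the finer-grained models referenced above, a Russell-style circumplex reduces an emotion estimate to valence and arousal coordinates. The sketch below shows only the quadrant mapping; the upstream scoring function that produces the coordinates is assumed to exist and is not shown.

def circumplex_quadrant(valence: float, arousal: float) -> str:
    # Label a point in [-1, 1] x [-1, 1] valence/arousal space. The
    # high-arousal negative quadrant is where the anger, fear, and disgust
    # discourses this topic targets would typically fall.
    if arousal >= 0 and valence < 0:
        return "high-arousal negative (e.g., angry, afraid)"
    if arousal >= 0:
        return "high-arousal positive (e.g., excited)"
    if valence < 0:
        return "low-arousal negative (e.g., depressed)"
    return "low-arousal positive (e.g., calm)"

print(circumplex_quadrant(valence=-0.7, arousal=0.8))
# -> high-arousal negative (e.g., angry, afraid)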

PHASE II: Develop the models of emotion and propaganda so that they can identify computational propaganda and its emotional valences and arousal states. Estimate the degree of artificial manipulation present in gists and stories found in live information streams from Twitter, websites, and blogs. Ensure that model results are exportable to other tools (such as social network tools, visualization tools, databases, and dashboards). Make available to the Navy a user-friendly, working prototype with built-in help capabilities for testing and evaluation in a cloud-based environment by multiple users, in the context of an online military virtual tabletop, as the final technical demonstration of this project. Conduct and complete model development and validation prior to Phase III.
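One low-friction way to satisfy the exportability requirement above would be a flat, line-oriented JSON schema that downstream social network tools, visualization tools, databases, and dashboards can ingest. The record fields below are assumptions for illustration, not a mandated format.

import json

record = {
    "message_id": "example-001",           # hypothetical identifier
    "gist": "embassy protest rumor",       # topic-level grouping
    "story_url": "https://example.com/x",  # story-level grouping
    "emotions": {"anger": 0.62, "fear": 0.21, "disgust": 0.08},
    "valence": -0.7,
    "arousal": 0.8,
    "manipulation_likelihood": 0.74,       # estimated, model-dependent
    "actor_class": "bot-enhanced",         # botnet | bot-enhanced | human
}

with open("scored_messages.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(record) + "\n")     # JSON Lines: one record per line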

PHASE III DUAL USE APPLICATIONS: Apply the knowledge gained in Phase II to further develop the interface, capabilities, and training components needed to transition the technologies to military customers. Make the technologies available on an existing cloud platform of the customer’s choosing (e.g., SUNNET, Navy Tactical Cloud, Amazon Cloud), working with cloud owners to deliver a subscription-based tool interoperable with other tools in enclave settings. Expand and develop the model to cope with real-time information flows and evolving information tactics.

The capability to detect influence campaigns designed to disrupt the credibility of organizations is needed worldwide. Western humanitarian organizations, international brands, and civil society organizations are continually under assault in the information environment by “trolls” and other malign actors for political and apolitical purposes. Currently, little is available on the market with this capability; scientific models of emotion applied to social media are relatively new.

REFERENCES:

1. Langroudi, George, Jordanous, Anna, and Li, Ling. “Music Emotion Capture: Sonifying Emotions in EEG Data.” Symposium on Emotion Modeling and Detection in Social Media and Online Interaction, 5 April 2018, University of Liverpool. https://www.emotiv.com/independent-studies/music-emotion-capture-sonifying-emotions-in-eeg-data/

2. Harvey, Robert, Muncey, Andrew, and Vaughan, Neil. “Associating Colors with Emotions Detected in Social Media Tweets.” Symposium on Emotion Modeling and Detection in Social Media and Online Interaction, 5 April 2018, University of Liverpool. https://docplayer.net/82902361-Symposium-on-emotion-modelling-and-detection-in-social-media-and-online-interaction.html

3. D’Errico, Francesca and Poggi, Isabella. “The Lexicon of Feeling Offended.” Symposium on Emotion Modeling and Detection in Social Media and Online Interaction, 5 April 2018, University of Liverpool. https://www.researchgate.net/publication/326096901_The_lexicon_of_feeling_offended

4. Badugu, Srinivasu and Suhasini, Matla. “Emotion Detection on Twitter Data Using Knowledge Base Approach.” International Journal of Computer Applications, Volume 162, No. 10, March 2017. https://pdfs.semanticscholar.org/6698/5a996eab1e680ffdd88a4e92964ac4e7dd56.pdf

5. Agarwal, Nitin, Al-Khateeb, Samer, et al. “Examining the Use of Botnets and Their Evolution in Propaganda Dissemination.” Defense Strategic Communications, Vol. 2, Spring 2017. https://www.stratcomcoe.org/nitin-agarwal-etal-examining-use-botnets-and-their-evolution-propaganda-dissemination

6. van Dijck, José and Poell, Thomas. “Understanding Social Media Logic.” Media and Communication, Vol. 1, Issue 1, August 2013, pp. 2-14. https://www.cogitatiopress.com/mediaandcommunication/article/view/70/60

KEYWORDS: Social Media, Computational Propaganda, Crowd Manipulation, Social Hysteria, Rumor


** TOPIC NOTICE **

These Navy topics are part of the overall DoD 2019.A STTR BAA. The DoD issued its 2019.A STTR BAA pre-release on November 28, 2018; the BAA opens to receive proposals on January 8, 2019, and closes February 6, 2019 at 8:00 PM ET.

Between November 28, 2018 and January 7, 2019, you may communicate directly with the Topic Authors (TPOC) to ask technical questions about the topics. During these dates, their contact information is listed above. For reasons of competitive fairness, direct communication between proposers and topic authors is not allowed starting January 8, 2019, when DoD begins accepting proposals for this BAA.
However, until January 23, 2019, proposers may still submit written questions about solicitation topics through the DoD's SBIR/STTR Interactive Topic Information System (SITIS), in which the questioner and respondent remain anonymous and all questions and answers are posted electronically for general viewing until the solicitation closes. All proposers are advised to monitor SITIS during the Open BAA period for questions and answers and other significant information relevant to their SBIR/STTR topics of interest.

Topics Search Engine: Visit the DoD Topic Search Tool at sbir.defensebusiness.org/topics/ to find topics by keyword across all DoD Components participating in this BAA.

Proposal Submission: All SBIR/STTR Proposals must be submitted electronically through the DoD SBIR/STTR Electronic Submission Website, as described in the Proposal Preparation and Submission of Proposal sections of the program Announcement.

Help: If you have general questions about the DoD SBIR/STTR program, please contact the DoD SBIR Help Desk at 800-348-0787 or via email at sbirhelp@bytecubed.com.