Early Detection of Information Campaigns by Adversarial State and Non-State Actors
Navy SBIR 2019.2 - Topic N192-129
ONR - Ms. Lore-Anne Ponirakis - loreanne.ponirakis@navy.mil
Opens: May 31, 2019 - Closes: July 1, 2019 (8:00 PM ET)

N192-129

TITLE: Early Detection of Information Campaigns by Adversarial State and Non-State Actors

 

TECHNOLOGY AREA(S): Battlespace, Human Systems, Information Systems

 

ACQUISITION PROGRAM: Marine Corps Information Groups, Deputy Commandant for Information, the Joint Information Warfighter

 

OBJECTIVE: This SBIR topic focuses on detecting hybrid, “cyborg” information actors that back, aid, and amplify human networks distributing propaganda and highly charged messages. The current state of botnet detection merely identifies automated features such as identical content, identical targets, coordinated message dispersal, and similar measurable enhancements; “smart” botnets that target individuals (such as super spreaders and super friends) and topic groups are becoming more widespread and are capable of greater impact.

Sentiment models alone, and bot detection methods alone, are insufficient to detect and defend against these smart botnets, which coordinate, amplify, and normalize the messages of hate, anger, and violence that are typical of cyber warfare.
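To make the “current state” described above concrete, the following is a minimal Python sketch of the kind of automated features it names: identical content and tight coordination of message dispersal. The record fields (author, text, ts) are hypothetical assumptions, and this is a baseline that smart botnets are expected to evade, not a proposed solution.

    # Flag accounts whose posts duplicate other accounts' posts within a
    # short time window -- the naive "identical content + coordinated
    # dispersal" signal named in the objective above.
    from collections import defaultdict

    def coordination_score(posts, window_seconds=60):
        """posts: list of dicts with 'author', 'text', 'ts' (datetime).
        Returns {author: score}, counting how often an account posted
        identical text within window_seconds of a different account."""
        by_text = defaultdict(list)
        for p in posts:
            by_text[p["text"].strip().lower()].append(p)
        scores = defaultdict(int)
        for text, group in by_text.items():
            if len(group) < 2:
                continue  # unique content: no cross-account duplication
            group.sort(key=lambda p: p["ts"])
            for a, b in zip(group, group[1:]):
                if a["author"] != b["author"] and \
                   (b["ts"] - a["ts"]).total_seconds() <= window_seconds:
                    scores[a["author"]] += 1
                    scores[b["author"]] += 1
        return dict(scores)

A cyborg network that paraphrases its payloads or staggers its timing would score near zero under such heuristics, which is precisely the detection gap this topic targets.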

 

DESCRIPTION: Online agitation has resulted in riots, attacks on tourists, ethnic violence, gender violence, instigation of cyber-attacks, murder, and terrorism (see references for a small list of examples). This agitation is aided and abetted by swarms of coordinated “bots”, “fake” accounts, and online loudspeakers of various types, from single influential individuals to platforms like Twitter, WhatsApp, blogs, and YouTube that are subject to algorithmic manipulation, often combined with social engineering. Volatile content is combined with other types of messaging to exploit crises and create conditions of panic, uncertainty, and hate. Military missions are increasingly under attack by propaganda, distortion campaigns, and influence operations crafted by state and non-state actors to undermine social trust and diminish the military’s ability to control its own messages. Further, online agitation creates very real dangers in situations of crisis, such as disasters and police actions, where the military must deploy to secure the safety of civilians. State-backed adversaries have invested in artificial intelligence (AI) and data mining technologies to craft sophisticated “botnet armies” and other stochastic manipulations, the better to support human propagandists and online agitators. These threats need to be identified and assessed for vulnerabilities and impact; guidance for counter-measures would be the next needed step.


The information environment includes many social platforms used to pollute information streams with emotionally laden appeals, propaganda and rumors, and distortions designed to polarize crowds and propagate social hysteria. Malicious campaigns to create, spread, and amplify civil discontent, instigate arguments, and manipulate audience perspectives have the potential to jeopardize military mission execution and to threaten warfighter and civilian safety. Current models are poorly suited to measuring and evaluating this content in online environments. The desired capabilities would enable analysis of content designed to impact cyber-social dynamics in topic groups.

 

Technologies under this topic might include new models and tools for detecting and evaluating stochastic manipulation, including the detection and assessment of coordinated botnets and high-impact “fake” accounts. The desired capabilities would evaluate the activities of suspected fakes and bots and measure their tendencies to apply stochastic and social engineering techniques to agitate, misinform, and shape the perceptions of target audiences. The social-cyber dynamics of botnets and other kinds of fakes often depend on the mechanics of the platform as much as on the payload (the content) of the messages. These botnets and fakes use “likes”, “upvotes”, “replies”, “comments”, and “quotes” to insinuate themselves into communities and to back certain attitudes and opinions over others. Botnets and “fake” accounts (and fake groups) on many platforms are trained, coordinated, and developed using a number of stochastic (algorithmic) and social engineering methods, depending on the platform. These methods are designed to position these propaganda actors within vulnerable communities, using both supportive and validating messages (to position them as sympathetic members of the social community) and polarizing, manipulative messages that can be deployed at key moments to exploit crises and situations of high anxiety.
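One way to make the engagement-mechanics observation above concrete is a simple concentration measure: accounts that direct nearly all of their likes, replies, and quotes at a small set of targets behave more like positioned amplifiers than organic community members. The sketch below is illustrative only; the (actor, target) pair format is an assumption, not any platform’s API.

    # Score each account by the entropy of its engagement targets; low
    # entropy means likes/replies/quotes are concentrated on few accounts,
    # one possible signal of a positioned amplifier.
    import math
    from collections import Counter

    def target_entropy(engagements):
        """engagements: list of (actor, target) pairs, one per
        like/reply/quote. Returns {actor: entropy_in_bits}."""
        targets_by_actor = {}
        for actor, target in engagements:
            targets_by_actor.setdefault(actor, Counter())[target] += 1
        result = {}
        for actor, counts in targets_by_actor.items():
            total = sum(counts.values())
            result[actor] = -sum((c / total) * math.log2(c / total)
                                 for c in counts.values())
        return result

Such a measure would be only one feature among many; it illustrates how platform mechanics, not just message content, can feed detection models.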

 

Humans cooperate with these campaigns, sometimes knowingly and sometimes unknowingly, simply by accepting bot followers and bot help to get their messages out. “Cyborg” accounts, where the human has created “vanity” botnets of retweeters, are relatively easy for existing botnet detection capabilities to identify. Bots and fakes that target influencers and generate clouds of apparent support for agitational ideas over the voices of others in the discourse are harder to distinguish. The developed technology should be able to: (1) go beyond current botnet detection capabilities to create algorithms that can distinguish patterns of botnet-driven and stochastic manipulation, particularly those that are highly charged; (2) identify associations among botnets and cyborg accounts; and (3) visualize these relationships, such as linkages among followerships, the existence of broker accounts that link multiple communities, bot-training messages that reveal relationships among early botnets, and other patterns that can help distinguish natural, “organic” audiences from inorganic interlopers.
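For capability (3), one plausible starting point is standard network analysis: detect follower communities, then surface high-betweenness accounts whose neighbors span multiple communities as candidate brokers. The sketch below uses the open-source networkx library and assumes an edge list of follower relationships as input; it is a sketch of the general technique, not the required method.

    # Candidate "broker" accounts: high betweenness centrality plus
    # neighbors in more than one detected follower community.
    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    def find_brokers(follow_edges, top_n=10):
        """follow_edges: iterable of (follower, followed) pairs.
        Returns up to top_n (account, centrality) tuples for accounts
        that bridge multiple communities."""
        g = nx.Graph()
        g.add_edges_from(follow_edges)
        communities = list(greedy_modularity_communities(g))
        membership = {n: i for i, c in enumerate(communities) for n in c}
        centrality = nx.betweenness_centrality(g)
        brokers = []
        for node, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
            neighbor_comms = {membership[nbr] for nbr in g.neighbors(node)}
            if len(neighbor_comms) > 1:  # account links multiple communities
                brokers.append((node, score))
            if len(brokers) == top_n:
                break
        return brokers

The resulting broker list and community partition are natural inputs to the visualization of followership linkages described above.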

 

PHASE I: Develop sophisticated new capabilities to detect “cyborg” accounts, sophisticated fake accounts, and systems of coordinated botnets using prototype algorithms, models, and tools. Determine the feasibility of detecting suspect dormant bots and “weaponized botnets” (botnets currently operating that latch on to crisis situations and fast-moving trends to infiltrate and steer online conversations), and provide an initial assessment of their activities. Develop metrics and methods for the detection and analysis of sophisticated botnets. Provide guidance for identifying especially impactful bots that promote social hysteria or violent content, or engage in suspicious activities, suitable for the creation of TTPs (Tactics, Techniques, and Procedures) for identification and evaluation. A working software prototype capability is desirable. Prepare a Phase II plan.
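A feasibility-level screen for dormant bots might look for long-silent accounts that surge when a trend breaks. The thresholds and input layout in the sketch below are illustrative assumptions, not requirements of this topic.

    # Flag accounts with a long silent gap followed by a burst of posting,
    # a crude "dormant-then-weaponized" pattern.
    from datetime import timedelta

    def dormant_then_burst(account_posts, gap=timedelta(days=180),
                           burst_window=timedelta(days=2), burst_min=20):
        """account_posts: {account: sorted list of post datetimes}.
        Returns accounts showing >= gap of silence followed by
        >= burst_min posts inside burst_window."""
        flagged = []
        for account, times in account_posts.items():
            for i in range(1, len(times)):
                if times[i] - times[i - 1] >= gap:  # long dormant stretch
                    burst = [t for t in times[i:]
                             if t - times[i] <= burst_window]
                    if len(burst) >= burst_min:
                        flagged.append(account)
                        break
        return flagged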

 

PHASE II: Develop a technology that military operators can use to identify and evaluate coordinated botnets before and during deployment of weaponized content (e.g., propaganda, social hysteria propagation content, disinformation, and polarizing information). Develop early detection and warning indicators of coordinated bot networks, a capability to scan accounts for dormant bots, and a capability for tracking and monitoring the activities of coordinated bot networks. Ensure that model results are exportable to other tools in use by U.S. Navy, Marine Corps, or other military information operations toolkits (examples include Scraawl, Talkwalker, and Dataminr).
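Exportability could be satisfied by emitting detection results in a plain, documented interchange format. The JSON layout in the sketch below is a hypothetical example; none of the named toolkits (Scraawl, Talkwalker, Dataminr) mandates it.

    # Serialize detection results to versioned JSON for downstream ingest.
    import json
    from datetime import datetime, timezone

    def export_detections(detections, path):
        """detections: list of dicts like
        {"account": str, "score": float, "signals": [str],
         "network_id": str}. Writes a versioned JSON file."""
        payload = {
            "generated_at": datetime.now(timezone.utc).isoformat(),
            "schema_version": "0.1",  # hypothetical example schema
            "detections": detections,
        }
        with open(path, "w", encoding="utf-8") as f:
            json.dump(payload, f, indent=2)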

Develop a user-friendly interface that is available for testing and evaluation. Provide desirable built-in help features and guidance capabilities. Additional requirements would be developed for Phase III through engagement with stakeholders and potential customers.

 

PHASE III DUAL USE APPLICATIONS: Make these technologies available on an existing cloud platform (e.g., Sunnet, Navy Tactical Cloud, Amazon Cloud) and enable them to ingest live data streams from social media analysis platforms or from the Application Programming Interfaces (APIs) of social media directly, guided by stakeholder requirements and needs. Expand and develop the models and capabilities, including functions to create a database of coordinated botnets and dormant bots, interoperable with other tools. Develop capabilities to manage the database and address the needs of multiple customers. The product will enable commercial entities to monitor against botnet intrusion into their discourses, identify botnet-fueled information attacks, and develop counter-measures and strategies against fake discourses. This product will find markets in civil society organizations, diplomacy/government organizations, law enforcement entities, and crisis organizations attempting to quell social hysteria and defend against attempts to manipulate and deceive audiences and communities.
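As a sketch of the database function described above, the following uses Python’s standard sqlite3 module; the tables and columns are illustrative assumptions rather than a mandated schema.

    # Minimal botnet/dormant-bot database: one table of botnets, one of
    # member accounts, opened (and created if absent) on demand.
    import sqlite3

    DDL = """
    CREATE TABLE IF NOT EXISTS botnet (
        botnet_id TEXT PRIMARY KEY,
        first_seen TEXT,           -- ISO 8601 timestamp
        status TEXT                -- e.g., 'active', 'dormant'
    );
    CREATE TABLE IF NOT EXISTS account (
        account_id TEXT PRIMARY KEY,
        platform TEXT,
        botnet_id TEXT REFERENCES botnet(botnet_id),
        dormant INTEGER DEFAULT 0  -- 1 = suspected dormant bot
    );
    """

    def open_db(path="botnets.db"):
        conn = sqlite3.connect(path)
        conn.executescript(DDL)
        return conn

A plain relational store of this kind keeps the data interoperable: other tools can query it directly or consume exports built from it.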

 

REFERENCES:

1.   Iyengar, Rishi. “WhatsApp Has Been Linked to Lynchings in India. Facebook Is Trying to Contain the Crisis.” Cable News Network (CNN), September 30, 2018. https://cnn.com/2019/09/30/tech/facebook-whatsapp-india/misinformation/index.html/

 

2.   Fandos, Nicholas, Roose, Kevin, and Frenkel, Sheera. “Facebook Has Identified Ongoing Political Influence Campaign.” New York Times (NYT), July 31, 2018. https://www.nytimes.com/2018/07/31/us/politics/facebook-political-campaign-midterms.html

 

3.   McLaughlin, Timothy. “How Facebook’s Rise Fueled Chaos and Confusion in Myanmar.” Wired. https://www.wired.com/story/how-facebooks-rise-fueled-chaos-and-confusion-in-myanmar/

 

4.   Goldman, Adam and Shoumali, Karam. “Saudis’ Image Makers: A Troll Army and a Twitter Insider.” New York Times (NYT), October 20, 2018. https://www.nytimes.com/2018/10/20/us/politics/saudi-image-campaign-twitter.html/

 

5.   Bershidsky, Leonid. “Twitter’s Trolls Are Coming for Sweden’s Elections.” Bloomberg News, August 30, 2018. https://www.bloomberg.com/view/articles/2018-08-30/the-online-twitter-trolls-are-coming-for-sweden/

 

6.   Nimmo, Ben, Czuperski, Maks, and Brookie, Graham. “#BotSpot: The Intimidators.” DFRLab blog. https://medium.com/dfrlab/botspot-the-intimidators-135244bfe46b

 

7.   Schreckinger, Ben. “How Russia Targets the U.S. Military.” Politico, June 12, 2017. https://www.politico.com/magazine/story/2017/06/12/how-russia-targets-the-us-military-215247

 

8.   NATO Strategic Communications Center of Excellence (COE). “Internet Trolling as a Tool of Hybrid Warfare: The Case of Latvia.” 2017. https://www.stratcomcoe.org/internet-trolling-hybrid-warfare-tool-case-latvia-0/ (pdf: https://www.stratcomcoe.org/download/file/fid/3353)

 

KEYWORDS: C4ISR; Cyber Terrorism; Hybrid; Cyborg; Smart Botnets; Information Operations

 

 

** TOPIC NOTICE **

NOTICE: The data above is for casual reference only. The official DoD/Navy topic description and BAA information is available at https://sbir.defensebusiness.org/

These Navy Topics are part of the overall DoD 2019.2 SBIR BAA. The DoD issued its 2019.2 BAA SBIR pre-release on May 2, 2019, which opens to receive proposals on May 31, 2019, and closes July 1, 2019 at 8:00 PM ET.

Between May 2, 2019 and May 30, 2019, you may communicate directly with the Topic Authors (TPOC) to ask technical questions about the topics. During these dates, their contact information is listed above. For reasons of competitive fairness, direct communication between proposers and topic authors is not allowed starting May 31, 2019, when DoD begins accepting proposals for this BAA.


Help: If you have general questions about the DoD SBIR program, please contact the DoD SBIR Help Desk at 800-348-0787 or via email at sbirhelpdesk@u.group.