Collaborative Research: EAGER: SaTC AI-Cybersecurity: Just-in-Time AI-Driven Cyber-Abuse Education in Social Networks Grant

abstract

  • Social networks encourage casual interactions and expose users to a variety of forms of cyber abuse, which are known to have negative socio-psychological effects. Previous work has shown that only a fraction of cyber abuse victims adopt self-protective behaviors. This may occur because some victims lack the background knowledge required to identify cyber abuse and to assert appropriate protective behaviors. While education can be effective in this regard, classroom delivery may fail to reproduce the diverse and dynamic context of cyber abuse, making it difficult for students to effectively translate knowledge into practice. This project seeks to increase the adoption of self-protective behaviors by integrating educational content into social networking interactions. Artificial intelligence (AI) techniques will be used to optimize the placement and timing of educational content. The project has the potential to improve the security and privacy of vulnerable social network users.

    The project team will leverage their expertise in cybersecurity, AI, and education to investigate, develop and evaluate a new educational framework that provides just-in-time awareness training to identify and respond appropriately to cyber abuse when using social networks. First, the team will develop AI-based solutions to detect and classify cyber abuse based on abuse traces in the accounts of the users involved. The team will also leverage data and feedback collected from study participants to build a ground-truth dataset of instances and timelines of cyber abuse. Second, the team will design and implement targeted learning content and user interface nudges to deliver the knowledge required to make safer decisions in social network interactions. Third, the team will develop AI-based techniques to determine the ideal placement of learning content that improves user adoption of self-protective behaviors. Finally, the unique features of Facebook will be exploited to design evaluation experiments and educational outcomes-based techniques that capture user behaviors in the context of their regular Facebook interactions.

    This project is supported by a special initiative of the Secure and Trustworthy Cyberspace (SaTC) program to foster new, previously unexplored collaborations between the fields of cybersecurity, artificial intelligence, and education. The SaTC program aligns with the Federal Cybersecurity Research and Development Strategic Plan and the National Privacy Research Strategy to protect and preserve the growing social and economic benefits of cyber systems while ensuring security and privacy.

    This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

date/time interval

  • May 1, 2021 - April 30, 2023

sponsor award ID

  • 2114911

contributor