BLOOMINGTON, Ind. – For decades, movies and books have portrayed the potential of artificial intelligence, and researchers continue to explore ways in which computers can outpace humans. Professors at the Indiana University Kelley School of Business are conducting significant research and teaching about how AI is used, including increasingly in cybersecurity.
One of those researchers, Sagar Samtani, assistant professor of operations and decision technologies and a Grant Thornton Scholar, said artificial intelligence is increasingly efficient and effective at improving cybersecurity protection. He believes AI’s role in cybersecurity has rapidly become a critical concern worldwide.
“AI’s value for cybersecurity is in its ability to automatically sift through large quantities of data, including malware, log files and the Dark Web, and detect patterns missed by manual analysis,” said Samtani, also a fellow at the IU Center for Applied Cybersecurity Research.
Samtani came to Kelley in 2020. His research centers around AI for cybersecurity and cyber threat intelligence. He also studies deep learning, network science, and text mining approaches for smart vulnerability assessment, scientific cyberinfrastructure security, open-source software security, and Dark Web analytics.
He has published more than 40 journal, conference and workshop papers on these topics in leading venues, including MIS Quarterly and the Journal of Management Information Systems. His research has received funding from the NSF Cybersecurity Innovation for Cyberinfrastructure (CICI), CISE Research Initiation Initiative (CRII), and Secure and Trustworthy Cyberspace Education (SaTC-EDU) programs.
To help explain both the potential of AI in cybersecurity and where it falls short, we asked him to provide a few “sound bytes.” Below is a brief Q&A with Samtani:
Where does AI measure up to its promise?
“AI has seen significant success in cybersecurity tasks such as malware detection, malware evasion, Dark Web analytics (detecting emerging threats, modeling hacker language) for cyber threat intelligence, vulnerability detection, and threat modeling. Areas such as malware detection and evasion have benefited greatly due to significant advances in reinforcement learning (agents) and generative adversarial networks.
“Dark Web analytics tasks such as detecting emerging threats or modeling hacker language are achievable due to the advent of computational diachronic linguistics and advanced approaches to automatically generate embeddings of hacker terminology. Knowledge graphs have played a pivotal role in automatically modeling threats for organizations.”
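The diachronic idea Samtani describes can be illustrated with a minimal sketch: train term embeddings on hacker-forum text from two time periods, align them into a shared space, and flag terms whose vectors have drifted, since a shift in usage can signal an emerging threat. The embeddings, term names, and threshold below are all hypothetical toy values (the training and alignment steps are omitted), not an implementation of any specific published method.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical embeddings of hacker-forum terms, trained separately on
# two time periods and assumed already aligned into one vector space.
emb_2019 = {"exploit": [0.90, 0.10, 0.00], "loader": [0.10, 0.90, 0.10]}
emb_2021 = {"exploit": [0.88, 0.12, 0.05], "loader": [0.70, 0.10, 0.60]}

def drifted_terms(old, new, threshold=0.8):
    """Flag terms whose usage shifted between periods (low cosine)."""
    return [t for t in old if t in new and cosine(old[t], new[t]) < threshold]

print(drifted_terms(emb_2019, emb_2021))  # "loader" drifted; "exploit" did not
```

In practice the vectors would come from a model such as word2vec trained per time slice, and the low-similarity terms would be handed to analysts for review rather than acted on automatically.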
When it almost does?
“Many AI algorithms were designed outside of the context of cybersecurity. Consequently, many existing AI algorithms require significant extensions or adaptation to operate on cybersecurity data. Consider the task of generating cyber threat intelligence reports. Existing approaches to named entity recognition and extraction, for example, are designed to operate on text that does not include cybersecurity terms, concepts and technology names.
“Consequently, applying ‘off-the-shelf’ approaches may result in decent performance or help automate some cybersecurity tasks, but can often fall short of the success of carefully customized AI-enabled approaches.”
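The gap Samtani describes can be sketched concretely: a generic named entity recognizer trained on news text will not tag vulnerability identifiers or malware family names, so a customized pass adds domain-specific patterns. Everything below is a hypothetical illustration (the pattern, the gazetteer entries, and the labels are invented for this sketch), not a substitute for a properly trained domain model.

```python
import re

# Generic NER models miss cybersecurity entities; a customized pass
# adds domain-specific patterns and a small gazetteer of known names.
CVE_PATTERN = re.compile(r"CVE-\d{4}-\d{4,7}")          # vulnerability IDs
MALWARE_GAZETTEER = {"Emotet", "TrickBot", "Mirai"}      # example families

def extract_cyber_entities(text):
    """Extract CVE identifiers and known malware family names."""
    entities = [(m.group(), "CVE") for m in CVE_PATTERN.finditer(text)]
    for token in re.findall(r"[A-Za-z0-9]+", text):
        if token in MALWARE_GAZETTEER:
            entities.append((token, "MALWARE"))
    return entities

report = "Emotet operators weaponized CVE-2017-11882 in spam campaigns."
print(extract_cyber_entities(report))
# [('CVE-2017-11882', 'CVE'), ('Emotet', 'MALWARE')]
```

A production pipeline would instead fine-tune a sequence-labeling model on annotated threat reports; the point of the sketch is simply that the entity types themselves must be defined for the domain before any model can find them.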
Where does it fail completely in real-world cybersecurity tasks?
“One of the common expectations for AI is that it will entirely replace humans. This is one of the key shortcomings in deploying AI for real-world cybersecurity tasks. AI, in its current form, cannot replace all of the day-to-day activities of many security analysts. Similarly, AI also struggles to apply ‘common sense’ when executing most cybersecurity tasks.”
You’re relatively new to Kelley and IU. What was it about the capabilities and resources here that attracted you? How do you feel about the opportunities that you have as a Kelley faculty member to research and teach with colleagues across the university?