
Assembling a Cyber Range to Evaluate Artificial Intelligence / Machine Learning (AI/ML) Security Tools

by Jeffrey A Nichols, Kevin D Spakes, Cory L Watson, Robert A Bridges
Publication Type: Conference Paper
Book Title: Proceedings of the 16th International Conference on Cyber Warfare and Security
Publication Date: 2021
Page Numbers: 240 to 248
Issue: 1
Publisher Location: Tennessee, United States of America
Conference Name: 2021 International Conference on Cyber Warfare and Security (ICCWS 2021)
Conference Location: Cookeville, Tennessee, United States of America
Conference Sponsor: International Conference on Cyber Warfare and Security

In this case study, we describe the design and assembly of a cyber security test range built at Oak Ridge National Laboratory in Oak Ridge, TN, USA. The range provides a flexible environment for evaluating cyber security tools, particularly those involving AI/ML, in realistic settings and under controlled experiments that reveal each tool's strengths and weaknesses. Evaluations are designed to be repeatable, so additional tools can be evaluated and compared later, and the system can be scaled up or down to match experiment size. By the time of the conference we will have completed two full-scale, national, government challenges on this range, which evaluate the performance and operating costs of AI/ML-based cyber security tools for deployment in large, government-sized environments. We describe these evaluations to provide motivation and context for the design decisions and adaptations we have made. The first challenge measured end-point security tools against 100K malware samples chosen across a range of types. The second measures network detection of attempted penetrations and exploitations, with varying levels of covertness, in a high-volume business network. The scale of each challenge required us to build automation that repeats the experiments identically for every tool. Ensuring that the AI/ML tools cannot key on easy, artificial signs of malicious activity has been a particularly interesting and challenging aspect of designing and executing these events. After the challenges, the range continues to support other research, such as adversarial machine learning, for which the repeatability, scale, and automation developed for the national challenge events are essential.
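
To make the repeatable, per-tool evaluation concrete, the following minimal Python sketch illustrates the kind of automation loop described above for the endpoint challenge: every sample is detonated from the same clean snapshot so that each tool under test sees an identical environment. This is an illustration only, not the authors' harness; the directory layout and the functions revert_to_snapshot, detonate, and collect_verdict are hypothetical placeholders.

    import csv
    import pathlib

    SAMPLES_DIR = pathlib.Path("samples")      # hypothetical corpus of samples to replay
    RESULTS_CSV = pathlib.Path("results.csv")  # one row per (tool, sample) verdict

    def revert_to_snapshot(vm_name: str) -> None:
        """Placeholder: restore the analysis endpoint to a known-clean snapshot."""
        print(f"[{vm_name}] reverting to clean snapshot")

    def detonate(vm_name: str, sample: pathlib.Path) -> None:
        """Placeholder: transfer the sample to the endpoint and execute it."""
        print(f"[{vm_name}] detonating {sample.name}")

    def collect_verdict(vm_name: str, tool: str) -> str:
        """Placeholder: query the tool under test for its detection verdict."""
        return "unknown"  # a real harness would read the tool's alert log or API

    def run_challenge(tool: str, vm_name: str = "endpoint-01") -> None:
        samples = sorted(SAMPLES_DIR.glob("*")) if SAMPLES_DIR.is_dir() else []
        with RESULTS_CSV.open("a", newline="") as fh:
            writer = csv.writer(fh)
            for sample in samples:
                revert_to_snapshot(vm_name)  # identical starting state for every sample and tool
                detonate(vm_name, sample)
                writer.writerow([tool, sample.name, collect_verdict(vm_name, tool)])

    if __name__ == "__main__":
        run_challenge(tool="example-endpoint-tool")

In the actual challenges such a loop would be distributed across many endpoints and driven by the range's orchestration and automation systems, but the property sketched here, returning every endpoint to an identical state before each sample, is what allows results to be compared across tools and repeated later.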