AI and ML in Cybersecurity Part 2: QA Basics and Seatbelts


Last week, we illustrated a handful of flaws in Artificial Intelligence (AI) and Machine Learning (ML) algorithms that were revealed once they were applied to real-world situations. Soliton's AI expert, Dr. Tedd Hadley, offered the analogy that these biases are the equivalent of software bugs and described how basic Quality Assurance (QA) could be applied. We now continue that conversation.

Tedd: So to continue, we ask the question: why is AI/ML bypassing QA, skipping the careful policies and procedures that check for bugs, and short-circuiting methodical testing and retesting? It surely shouldn't be! But here are some possible explanations.

(1) Since the program is generated automatically by “math,” some people may assume the algorithms are free from human imperfections. 

(2) QA policies for AI/ML don't account for the fact that AI-designed behavior lacks any background knowledge of the world we take for granted. To us, it seems obvious to think of kangaroos, but if they were never part of the training inputs, there is no way for the algorithm to account for them (see the sketch after this list).

(3) It's often too hard to debug AI/ML decisions, so it's easier to fall back on (1) and take the output on faith. Writing Explainable AI isn't easy.
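To make point (2) concrete, here is a minimal sketch in Python of the kind of QA gate it calls for. The toy classifier, its "training", and the confidence threshold are all invented for illustration; a real out-of-distribution check would sit on an actual model.

```python
# Minimal sketch of a QA gate for point (2): probe the model with an input
# it was never trained on and fail the build if it answers with high
# confidence. The toy model, features, and threshold are hypothetical.

def classify(features):
    """Toy 'trained' model: it has only ever seen cars and pedestrians,
    so it will happily mislabel anything else as one of the two."""
    known = {"car": (4, "metal"), "pedestrian": (2, "flesh")}
    for label, signature in known.items():
        if features == signature:
            return label, 0.99       # high confidence on familiar input
    return "pedestrian", 0.97        # overconfident guess on the unfamiliar

def qa_out_of_distribution_test():
    """QA gate: the model must not be confident about inputs resembling
    nothing in its training data -- the 'kangaroo' case."""
    kangaroo = (2, "fur")            # never part of the training inputs
    label, confidence = classify(kangaroo)
    assert confidence < 0.5, (
        f"model is {confidence:.0%} sure a kangaroo is a '{label}'; "
        "ship is blocked until out-of-distribution handling is added"
    )

try:
    qa_out_of_distribution_test()
except AssertionError as failure:
    print("QA gate failed:", failure)   # fails, as it should for this model
```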

In response to (1) and (2): as an industry, we do QA for traditional programming even for the best programmers. For AI/ML, we need to do QA as if the algorithms were fundamentally inexperienced human programmers. Finally, to counter issue (3), we need Explainable AI. There's a lot of work going on in this area (as covered by Cliff Kuang in The New York Times Magazine), and in our own development at Soliton, we're striving to make every conclusion/classification fully explainable. If we can't explain it, it is basically useless. Those developing the QA must also recognize that one person's exception may be another person's normal. For example, driving on the right-hand side of the road in San Francisco provokes a completely different reaction from oncoming drivers than driving on the right-hand side of the road in London. Our QA must extend beyond the programming to the inputs and outputs, taken from a broad and comprehensive perspective.
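As a hedged illustration of "explainable by construction", the following Python sketch refuses to emit any verdict that doesn't carry a human-readable reason. The rules and event fields are hypothetical stand-ins, not Soliton's actual detection logic.

```python
# Sketch of "explainable by construction": every verdict must carry a
# human-readable reason, or there is no verdict at all. The rules and
# event fields are hypothetical stand-ins for real detection logic.

RULES = [
    # (rule name, predicate over an event dict, verdict)
    ("known-bad MAC prefix",
     lambda e: e["mac"].lower().startswith("00:de:ad"), "block"),
    ("login outside business hours",
     lambda e: not 8 <= e["hour"] < 18, "flag"),
]

def classify(event):
    """Return (verdict, explanation). Even 'allow' states its reason,
    so no decision is ever a silent, unexplainable default."""
    for name, predicate, verdict in RULES:
        if predicate(event):
            return verdict, f"rule matched: {name}"
    return "allow", "no detection rule matched"

verdict, why = classify({"mac": "00:DE:AD:12:34:56", "hour": 3})
print(f"{verdict} ({why})")   # block (rule matched: known-bad MAC prefix)
```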

So we have to rely on humans again? Don't humans have the cognitive biases that lead to these algorithmic biases in the first place?

Tedd: Cognitive biases are the bugs of evolution-designed biological neural networks, so I think biases in artificial neural networks can be thought of as bugs in the same way. We dealt with the former using traditional QA; we need the same and more for the latter. Extensive testing using traditional QA practices, as well as new QA designed specifically for AI/ML, should be part of every AI product development plan. New QA practices for AI/ML are being steadily discovered, and we should incorporate them as rapidly as possible.
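One concrete example of a QA practice designed for ML is metamorphic testing: if a transformation of the input shouldn't change the right answer, it must not change the model's verdict either. A minimal sketch, with a toy detector standing in for a real model:

```python
# Sketch of metamorphic testing: shuffling the packets in a flow summary
# does not change what the flow is, so it must not change the verdict.
# classify() is a toy anomaly detector standing in for a real model.

import random

def classify(packet_sizes):
    """Toy detector: flags a flow whose average packet size is unusual."""
    average = sum(packet_sizes) / len(packet_sizes)
    return "anomalous" if average > 1200 else "normal"

def test_order_invariance():
    flow = [64, 1500, 400, 1500, 900]
    baseline = classify(flow)
    for seed in range(100):                 # many meaning-preserving variants
        shuffled = flow.copy()
        random.Random(seed).shuffle(shuffled)
        assert classify(shuffled) == baseline, (
            "verdict changed under a transformation that preserves meaning"
        )

test_order_invariance()
print("metamorphic QA check passed")
```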

It certainly sounds like the future is bright, but not quite living up to the hype yet. It also seems like we need to keep taking basic precautions against failure. In the 1970s, people used to say they wouldn't need seat belts if other people were better drivers, but after seat belts became standard, we recognized their value. Maybe AI will be those better drivers we imagined, but I'm still going to wear my seat belt!

Tedd: AI/ML right now is suffering from poor QA and poor explainability, and this results in biases or, as you might say, bugs. We should assume the worst about AI/ML products unless there is transparency into both the QA design process and each AI/ML decision or classification. Until then, your advice is well taken.

Thanks, Tedd! So this leads us to ask: what might the seat belts for IT security be? We recommend that any use of AI- or ML-enabled software be layered with traditional IT security. It is another dimension of the layering we already deploy to protect our IT environments. Meanwhile, as a security vendor, we will continue to develop across the full spectrum of this new dimension. As we work on machine learning for detecting anomalies, we'll also continue to offer simple solutions such as the MAC address whitelist at the core of our NetAttest LAP One technology. Nothing fancy; it just works.
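For the flavor of such a seat belt, here is an illustrative sketch of a MAC address whitelist check in Python. The addresses and names are hypothetical, and this is not NetAttest LAP One's implementation; the point is that the control is deterministic, auditable, and fails closed.

```python
# Illustrative sketch of the seat belt itself: a MAC address whitelist is a
# few lines of deterministic code that fails closed. The addresses and
# names are hypothetical; this is not NetAttest LAP One's implementation.

ALLOWED_MACS = {
    "a4:5e:60:12:34:56",   # printer, 2nd floor
    "00:1b:63:aa:bb:cc",   # badge reader, lobby
}

def admit(mac: str) -> bool:
    """Allow a device onto the LAN only if its MAC is on the list."""
    return mac.strip().lower() in ALLOWED_MACS

print(admit("A4:5E:60:12:34:56"))   # True:  known device gets on
print(admit("de:ad:be:ef:00:01"))   # False: unknown device stays off
```

Unlike a learned model, this control has no training distribution to fall outside of; when the ML layer misses, the whitelist still holds.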

 

For a general overview of Artificial Intelligence vs. Machine Learning vs. Deep Learning, please check out Data Science Central.

For information about NetAttest LAP One, please click here to download the data sheet or check out the NetAttest LAP One webinar.

Cliff Kuang, "Can A.I. Be Taught to Explain Itself?", The New York Times Magazine: https://www.nytimes.com/2017/11/21/magazine/can-ai-be-taught-to-explain-itself.html

"Artificial Intelligence vs. Machine Learning vs. Deep Learning", Data Science Central: https://www.datasciencecentral.com/profiles/blogs/artificial-intelligence-vs-machine-learning-vs-deep-learning