The Limits of AI and ML in Cybersecurity Solutions

The scarcity of cybersecurity skills, the growing number and sophistication of attacks, and gangs of smart, aggressive cybercriminals have created a perfect storm for cybersecurity teams. Defending networks, endpoints, and data can seem like a Herculean task some days. The advent of artificial intelligence and machine learning (AI/ML) tools has offered some relief, and organizations have quickly embraced the technology. A Pillsbury Law study found that half of executives believe AI and ML offer the best defense against nation-state cyberattacks.

However, while the study stated that automating threat detection through AI improves security, the technology alone will not solve all of your cybersecurity issues. In fact, these technologies can make cybersecurity systems weaker in some respects.


“In part, this is because there is a nascent but potentially growing threat landscape in which malicious actors use AI to penetrate weak systems or exploit the complexities of AI-dependent cybersecurity systems,” the report says. In other words, cybercriminals often use the same technologies to attack and penetrate systems that organizations use for defense.

As more organizations implement AI and ML into their security systems, they must also understand the limitations of the technology.

Myths surrounding AI in cybersecurity

The biggest misconception is that AI/ML will immediately replace a trained security analyst, said Andrew Hay, COO of LARES Consulting. “AI/ML is only as valuable as the source data that is fed into the machine.” Humans dictate the data entered into the system so that machine learning can build patterns and follow behaviors that reveal abnormalities. But it goes further: AI can find potential problems, but it is up to a live person to decide whether an alert is a true or false positive and then craft a response.

“Maybe this could happen in the future, or after extensive training on the organization’s environment,” Hay said. “Regardless of what the vendor tells you, you can’t just drop in a box and have it replace two or three trained security personnel.”
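To make that division of labor concrete, here is a minimal sketch of the pattern Hay describes, assuming scikit-learn's IsolationForest and entirely made-up login telemetry (not any vendor's product): the model surfaces anomalies, but an analyst still has to decide which alerts matter.

```python
# Minimal sketch: an ML model surfaces anomalies, but a human analyst
# still decides whether each alert is a true or false positive.
# Assumes scikit-learn and numpy; the feature set is illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical login telemetry: [hour_of_day, failed_attempts, bytes_transferred]
normal = np.column_stack([
    rng.normal(13, 3, 500),        # logins cluster around business hours
    rng.poisson(1, 500),           # occasional failed attempts
    rng.normal(5_000, 1_500, 500), # typical transfer sizes
])
suspicious = np.array([[3, 25, 90_000]])  # 3 a.m., many failures, large transfer

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

for event in np.vstack([normal[:3], suspicious]):
    if model.predict(event.reshape(1, -1))[0] == -1:
        # The model only raises the alert; a trained analyst must confirm
        # whether this is a real compromise or an unusual-but-benign login.
        print(f"ALERT for analyst review: {event}")
```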

Another myth concerns the real effectiveness of AI systems as a cybersecurity solution. At one extreme is the argument that AI and ML are a panacea for all things cybersecurity-related, explained Dr. Sohrob Kazerounian, head of AI research at Vectra, while at the other extreme is the argument that AI and ML play no role in cybersecurity at all.

“The real truth is, unfortunately, much less interesting and not particularly quotable by marketing departments. The fact is that AI and ML are not, in and of themselves, a silver bullet for your security operations center (SOC),” Kazerounian said. “Not using them, however, would sadly leave your SOC in the dark when it comes to a wide range of current and future attacks.”

Simply put, cybersecurity solutions that do not adopt AI or ML cannot keep pace with a changing threat landscape; on the other hand, solutions that rely only on generic AI and ML techniques, developed without security context and domain specificity, tend to look only for statistical anomalies in an environment.

“This creates an attentional and operational overload and distracts from the true behaviors of attackers, which are often designed to look benign,” Kazerounian said.
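A toy illustration of that failure mode, using nothing more than a z-score over hypothetical request counts, shows how a context-free statistical detector turns routine business events into alerts:

```python
# Minimal sketch of the problem Kazerounian describes: a generic,
# context-free detector that flags any statistical outlier.
# All numbers and thresholds here are hypothetical.
import statistics

# Hourly request counts for one server; the midday spike is a routine
# marketing campaign, not an attack.
hourly_requests = [120, 115, 130, 125, 118, 122, 950, 128, 121, 119, 117, 123]

mean = statistics.mean(hourly_requests)
stdev = statistics.stdev(hourly_requests)

for hour, count in enumerate(hourly_requests):
    z_score = (count - mean) / stdev
    if abs(z_score) > 2:
        # Without security context, an unusual-but-benign event becomes an
        # alert the SOC has to chase, while a low-and-slow attacker who
        # stays inside "normal" statistics is never flagged at all.
        print(f"Alert: hour {hour} had {count} requests (z={z_score:.1f})")
```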

AI and ML are different technologies

There is a tendency to talk about AI and ML as a single, unified technology, but they are different. As Microsoft explains, “An intelligent computer uses artificial intelligence to think like a human and perform tasks on its own. Machine learning is how a computer system develops its intelligence.” Without knowing how each technology works or what benefits it brings, you run the risk of limiting its effectiveness.

Organizations should investigate whether the technology will do what a human alone cannot, Kazerounian advised. AI and ML should save time for human analysts, not distract them from actual attacks.

“Agonizing over whether something is AI or ML is a lot like worrying about whether submarines can swim,” Kazerounian said. “In the end, what really matters is whether the solution works.”

Integration with legacy systems

The introduction of AI and ML as security solutions will certainly offer better protection, but don’t expect the technologies to integrate seamlessly.

“Extensive data manipulation and integration will be needed to effectively apply new security solutions to older systems,” Hay said.

Hay also noted that AI and ML do not work as advertised without thorough training on the proper data sources. Both the technologies and their users must undergo extensive training and tuning for the specific customer environment.
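A rough illustration of the data manipulation Hay describes, with hypothetical log formats and field names, might look like this: legacy systems emit logs in incompatible formats, and each must be normalized into one schema before an AI/ML tool can be trained on it.

```python
# Minimal sketch: normalizing heterogeneous legacy logs into one schema.
# The log formats and field names below are hypothetical.
import csv
import io
import json

def normalize_legacy_csv(line: str) -> dict:
    """Old firewall appliance: 'timestamp,src_ip,action' CSV rows."""
    ts, src_ip, action = next(csv.reader(io.StringIO(line)))
    return {"timestamp": ts, "source": src_ip, "event": action.lower()}

def normalize_modern_json(line: str) -> dict:
    """Newer endpoint agent: JSON with different field names."""
    record = json.loads(line)
    return {
        "timestamp": record["time"],
        "source": record["client_addr"],
        "event": record["verdict"].lower(),
    }

raw_events = [
    ("legacy", "2023-01-05T03:12:00,10.0.0.7,DENY"),
    ("modern", '{"time": "2023-01-05T03:13:10", "client_addr": "10.0.0.9", "verdict": "Allow"}'),
]

normalizers = {"legacy": normalize_legacy_csv, "modern": normalize_modern_json}

# Everything ends up in one consistent schema a model can be trained on.
unified = [normalizers[kind](line) for kind, line in raw_events]
print(unified)
```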

Therefore, while you should adopt AI and ML to improve your cybersecurity posture, it is important to understand that they may not be the answer to all your needs. Like any technology, they have limits on what they can and cannot do.


