Using Behavioral AI to Actively Monitor Threat Landscapes

KEY TAKEAWAYS

Cybersecurity has seen the emergence of intricate and perilous threats, such as phishing and ransomware. Behavioral AI has emerged as a potent ally in the battle against these threats, using behavioral analysis of digital entities to identify and counter risks. Nevertheless, the synergy between humans and behavioral AI remains vital for effective threat detection and removal.

Cyber threats have become more subtle, sophisticated, and dangerous — but cybersecurity has a weapon in the form of artificial intelligence (AI).

Modern cyber threats take the form of phishing, ransomware, Denial-of-Service (DoS), malware, and spyware — and they are deceptive and effective.

Behavioral AI, as the name indicates, analyzes the behavior of entities such as systems, files, emails, or attachments to identify threats and flag or remove them.

For example, behavioral AI might flag an aberration in activity patterns when a dormant account at a financial institution suddenly becomes hyperactive and receives multiple high-value transactions.

Such events may not only evade standard antivirus solutions but also prove highly damaging.
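To make that concrete, here is a toy sketch of a baseline check of this kind, written in Python purely for illustration and not tied to any particular product: it builds a per-account baseline of daily transaction counts and flags a day that deviates sharply from it.

    from statistics import mean, stdev

    def flag_anomalous_day(daily_tx_counts, new_day_count, threshold=3.0):
        """Flag a day whose transaction count deviates sharply from the account's baseline.

        daily_tx_counts: historical transactions per day for one account.
        new_day_count:   today's transaction count.
        threshold:       how many standard deviations count as an aberration.
        """
        baseline = mean(daily_tx_counts)
        spread = stdev(daily_tx_counts) or 1.0  # guard against a perfectly flat history
        z_score = (new_day_count - baseline) / spread
        return z_score > threshold, z_score

    # A dormant account: near-zero activity for weeks, then a sudden burst.
    history = [0, 0, 1, 0, 0, 0, 1, 0, 0, 0]
    is_anomaly, score = flag_anomalous_day(history, new_day_count=14)
    print(is_anomaly)  # True: the burst sits far outside the learned baseline

Real behavioral AI models far richer signals (login times, devices, counterparties, and so on), but the principle of comparing new activity against a learned baseline is the same.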

However, the role of human beings in removing threats remains equally important. In fact, in many cases, behavioral AI cannot succeed without human cooperation.

What is Behavioral AI?

A computer system comprises various entities: users, endpoint devices like smartphones and laptops, cloud services, files and data, and network traffic, to name a few.

Any of these entities can be compromised at any time, putting the computer system or the institution behind it in severe danger.

A typical example that many of us might have experienced is Google blocking access to a website because it thinks it is receiving unusual traffic. Though Google may often confuse a normal situation with an abnormality, this is an example of its AI systems at work. Its AI systems analyze the traffic and flag any event it thinks is an anomaly or aberration. This is behavioral AI at work – using AI techniques to analyze and understand the behavior of various entities in a computer system.

Behavioral AI performs behavioral modeling, anomaly detection, user and entity behavior analytics, threat and phishing detection, automated responses, and more, making it a sophisticated approach to countering cyber threats.

Just as human beings can identify changes in the behavior of people they know, behavioral AI can identify deviations from a baseline in the behavior of entities in a computer system.

The Role of Behavioral AI in Combating Threats

Behavioral AI differs from the standard cybersecurity approach in how it handles threats: while the traditional approach can deal only with known threats, behavioral AI can handle both known and unknown threats in real time.

Behavioral AI is trained on vast volumes of threat data streams, which enable it to continuously learn about evolving forms of threats.

So, when it identifies a threat, it raises an alarm or removes the threat through an automated system.

Automated threat removal and faster identification mark another difference between the traditional approach and the behavioral AI approach.

The traditional approach involves identifying the threat and raising an alarm, following which the threat is removed manually. This is a time-consuming process.
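For contrast, here is a minimal, purely illustrative sketch of the kind of automated-response policy behavioral AI enables; the function names, thresholds, and integrations are assumptions made for the example, not any vendor's API.

    def respond_to_detection(entity_id, threat_score, quarantine, notify_analyst,
                             auto_threshold=0.9, alert_threshold=0.6):
        """Toy policy: act immediately on high-confidence detections, escalate
        uncertain ones to a human analyst, and ignore the rest.

        quarantine and notify_analyst are callables supplied by the caller,
        e.g. endpoint-agent and ticketing integrations."""
        if threat_score >= auto_threshold:
            quarantine(entity_id)                       # automated removal/isolation
            notify_analyst(entity_id, "auto-contained")
            return "contained"
        if threat_score >= alert_threshold:
            notify_analyst(entity_id, "needs review")   # human stays in the loop
            return "escalated"
        return "ignored"

    # Example wiring with stand-in integrations:
    print(respond_to_detection(
        "laptop-042", threat_score=0.95,
        quarantine=lambda e: print(f"isolating {e}"),
        notify_analyst=lambda e, msg: print(f"ticket for {e}: {msg}"),
    ))

The detection and response steps are wired together, so containment does not have to wait for a manual ticket, while lower-confidence detections still go to a human analyst.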

Behavioral AI’s role can be summed up in the following ways:

  • Identifying malware in both labeled and unlabeled data. While labeled data provides a baseline for identifying suspicious activity, unlabeled data has no such baseline, so behavioral AI learns about it on the go (a simplified sketch of this idea follows this list).
  • Detecting phishing attempts. Phishing tricks have been evolving and becoming more subtle. For example, emails carrying malicious links or attachments can look almost identical to genuine emails. AI can identify even these emails because it has been learning about such content.
  • Providing network security. Computer systems receive substantial traffic volumes, and sophisticated threats can camouflage themselves as regular traffic. However, AI can identify such threats because it has been constantly learning about them.
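As a simplified sketch of learning from unlabeled data, the example below assumes scikit-learn's IsolationForest and invented traffic-flow features; it fits a model on unlabeled flow records and then scores a suspicious-looking flow against that learned baseline.

    # Hypothetical example: learning "normal" traffic behavior from unlabeled
    # flow records and flagging deviations with an Isolation Forest.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)

    # Toy unlabeled feature matrix: [bytes_sent, packet_count, distinct_dest_ports] per flow.
    normal_flows = rng.normal(loc=[50_000, 400, 3], scale=[10_000, 80, 1], size=(1_000, 3))

    model = IsolationForest(contamination=0.01, random_state=0)
    model.fit(normal_flows)  # no labels needed; the model learns the shape of "normal"

    # A flow that quietly scans many ports while sending little data.
    suspect = np.array([[2_000, 900, 120]])
    print(model.predict(suspect))  # -1 means "anomalous", 1 means "looks like baseline"

Because no labels or signatures are required, a model like this can flag traffic that matches no known pattern, which is the advantage over purely rule-based detection described above.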

Case study: AI in Action

A Fortune 500 telecom company introduced AI to classify encrypted traffic flows into application categories. The main problems the company faced were:

  • Manual labeling of traffic data proved too slow and consumed precious resources.
  • Network traffic was analyzed based on a static set of rules, making the system vulnerable to suspicious traffic data that didn’t match the rules.
  • The existing system struggled to adapt to changing data distributions, for example when responding to alarms or network problem tickets.
  • The company needed multiple tools to provide security to its computer system, which was expensive and difficult to manage.

AI significantly changed the results post-deployment:

  • Before AI, the system could produce an initial subset of 2,000 ground-truth labeled examples; after AI, it produced 198,000 additional programmatically-labeled examples (a generic sketch of programmatic labeling follows this list).
  • The AI model was 26.2% more efficient than its predecessor.
  • AI was 77.3% more accurate than the rules-based approach of the former system deployed by the company.
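The case study does not describe the company's actual pipeline, so the following is only a generic illustration of what programmatically labeled examples can mean in practice: simple heuristic rules applied to unlabeled flow records to expand a small ground-truth set. The rule, field names, and thresholds are invented for the example.

    def label_by_port(flow):
        """A single rough labeling rule; real systems combine many such weak rules."""
        if flow["dest_port"] == 443 and flow["avg_packet_size"] > 1000:
            return "video-streaming"
        if flow["dest_port"] == 53:
            return "dns"
        return None  # abstain: leave this flow for other rules or for the model

    unlabeled_flows = [
        {"dest_port": 443,  "avg_packet_size": 1300},
        {"dest_port": 53,   "avg_packet_size": 80},
        {"dest_port": 8443, "avg_packet_size": 600},
    ]

    programmatic_labels = [(f, label_by_port(f)) for f in unlabeled_flows]
    labeled = [(f, y) for f, y in programmatic_labels if y is not None]
    print(f"{len(labeled)} of {len(unlabeled_flows)} flows labeled programmatically")

In real deployments, many such rules are combined and their disagreements reconciled before the resulting labels are used to train the traffic classifier.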

Limitations

AI has redefined cybersecurity management, and many case studies have established its usefulness. However, AI isn’t a foolproof solution, at least not yet.

It’s bound by limitations that raise questions about its efficacy, including:

  • AI is an evolving technology and still struggles to offer precise solutions to cyber threats. Even as it is deployed against them, questions are being raised about the reliability of its output.
  • AI is not yet robust enough to handle the series of complex actions needed to recover from attacks. One reason is a lack of precision and accuracy, which keeps engineers from fully trusting it.
  • Cyber attackers also use AI, making the threats more sophisticated and potent.

The Bottom Line

We need to remember that AI is still an evolving technology.

The limitations are real, and organizations are facing the question of how much they should trust AI. Yet, there is proven benefit in deploying AI as part of an arsenal of cybersecurity tactics.

The best way forward is probably not to get carried away with hype, but to objectively assess the capabilities of AI versus traditional systems and find a combination of both that suits you or your organization.

Kaushik Pal

Kaushik is a technical architect and software consultant with over 23 years of experience in software analysis, development, architecture, design, testing, and training. He has an interest in new technology and innovation areas. He focuses on web architecture, web technologies, Java/J2EE, open source, WebRTC, big data, and semantic technologies. He has demonstrated his expertise in requirement analysis, architecture design and implementation, technical use case preparation, and software development. His experience spans domains such as insurance, banking, airlines, shipping, document management, and product development. He has worked with a wide variety of technologies starting from mainframe (IBM S/390),…