Armed with Artificial Intelligence, Hackers and the Guardians of IT Square Off on a New Digital Battleground

Tom Lemon, Managing Director Technology Consulting, Protiviti UK

Over the past few years, the transformational promise of artificial intelligence (AI) has begun to materialize in the business and consumer worlds. AI enables computers and machines to simulate intelligent human behavior. We’re seeing shippers turn to self-driving vehicles for 24/7 delivery, consumers adopt household voice assistants to make life easier, and surgeons use smart robotics to enhance precision and outcomes in the operating room.

Now AI has become the latest front in the war to protect systems, programs and networks from digital attacks, and it has galvanized the attention of a cybersecurity industry that’s constantly seeking innovative tools. Venture capital funds are pouring money into cybersecurity startups employing AI techniques such as machine learning, and the growing emphasis on utilizing AI to prevent digital intrusions is fueling debate and discussion at high-profile industry events such as this summer’s Black Hat CISO Summit. So far, about one-third of businesses rely on AI such as machine learning for cybersecurity, and almost four in 10 rely on automation, according to one survey.

Unfortunately, AI is also giving sophisticated hackers the ability to launch advanced assaults that can detect and evade defensive tactics, and you can bet that these malicious agents are eager to unleash their new weapon. In fact, IBM’s research division recently developed a “highly targeted and evasive” AI-powered attack, dubbed DeepLocker, to determine how current malware methods can incorporate AI and to understand the threats posed to businesses and consumers alike.

This virtual “battle of the bots” between cyber thugs and those tasked with blocking their invasions is only going to accelerate given the pace of technological advances and the skyrocketing number of interconnected digital devices. For this reason, we believe it is important for organizations to include AI in their cybersecurity strategies. However, they should view AI as one more weapon in their arsenal, not as a silver bullet that replaces other measures to prevent breaches. Nor, in our estimation, is the technology ready to remove humans from critical security roles just yet. Instead, AI is helping to prioritize areas of concern and to identify and remediate potential threats more quickly and consistently.

Strengthen Your Fortress

Imagine a scenario in which 50 security analysts are constantly searching for threats across thousands of events within a company’s IT environment. Not only is that a pricey proposition, but it would almost certainly fail to spot every danger. AI technologies such as machine learning, on the other hand, can quickly scour data and direct analysts to patterns of abnormal or suspicious machine and human behavior. An AI-human collaboration in this situation would give organizations a greater chance of swiftly identifying attacks and shutting them down.
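To make the pattern-spotting idea concrete, here is a minimal Python sketch that uses scikit-learn’s IsolationForest to flag anomalous events for analyst review. The event features (outbound traffic, login hour, failed authentications) and the simulated data are illustrative assumptions, not a prescription for any particular tool.

# Minimal sketch: flagging anomalous events for analyst review with
# scikit-learn's IsolationForest. The feature set is illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=42)

# Simulated "normal" event telemetry: [bytes_out_mb, login_hour, failed_auths]
normal_events = np.column_stack([
    rng.normal(5, 2, 10_000),    # modest outbound traffic
    rng.normal(13, 3, 10_000),   # activity clustered around business hours
    rng.poisson(0.2, 10_000),    # occasional failed logins
])

# A few suspicious events: large transfers at 3 a.m. with many failed logins
suspicious_events = np.array([[250.0, 3.0, 12.0], [180.0, 2.0, 9.0]])

model = IsolationForest(contamination=0.01, random_state=42)
model.fit(normal_events)

# predict() returns -1 for anomalies and 1 for inliers
flags = model.predict(np.vstack([normal_events[:5], suspicious_events]))
print(flags)  # the last two events should come back as -1

In practice, events flagged this way would be routed to analysts for triage rather than acted on blindly, which is the collaboration the scenario above describes.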

New automation technologies designed to respond to adverse incidents are also emerging. Say a company’s AI-enabled security tool identifies a “bad” event on a server. Almost in one fell swoop, the automation technology can stop or isolate the server to limit damage and capture a forensic image of it. That image can then be spun up in a quarantine zone for examination.
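As a rough illustration of that containment sequence, the hypothetical Python sketch below strings the steps together: isolate the server, capture a forensic image, and move the image into a quarantine zone. The helper functions are placeholders standing in for whatever orchestration platform or cloud API an organization actually uses.

# Hypothetical containment playbook for the sequence described above.
# The helper functions are placeholders, not calls to any real product API.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("incident-response")


def isolate_server(server_id: str) -> None:
    """Placeholder: apply a restrictive network policy to the server."""
    log.info("Isolating server %s from the network", server_id)


def capture_forensic_image(server_id: str) -> str:
    """Placeholder: snapshot disk and memory; return an image identifier."""
    image_id = f"forensic-{server_id}"
    log.info("Captured forensic image %s", image_id)
    return image_id


def move_to_quarantine(image_id: str) -> None:
    """Placeholder: spin the image up in an isolated analysis environment."""
    log.info("Image %s moved to quarantine zone for examination", image_id)


def respond_to_bad_event(server_id: str) -> None:
    """Containment steps triggered when the AI tool flags a 'bad' event."""
    isolate_server(server_id)
    image_id = capture_forensic_image(server_id)
    move_to_quarantine(image_id)


if __name__ == "__main__":
    respond_to_bad_event("srv-042")

The value of automating this playbook is consistency: the same containment steps run every time, in seconds, regardless of which analyst is on shift.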

Secure coding practices are critical in the battle to reduce vulnerabilities inherent in software. In a growing and positive trend, we are seeing companies incorporate machine learning into the early stages of the software development process to predict when programmers are most likely to introduce bugs.
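One way to picture this is a classifier trained on commit-level metadata to estimate the risk that a change introduces a bug. The Python sketch below uses scikit-learn with entirely synthetic features and labels; a real deployment would draw on an organization’s own version-control and defect history.

# Illustrative sketch: predicting bug-prone code changes from commit features.
# Features and labels are synthetic assumptions for demonstration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=0)
n = 5_000

# Commit features: [lines_changed, files_touched, hour_of_commit, author_recent_bugs]
X = np.column_stack([
    rng.integers(1, 500, n),
    rng.integers(1, 20, n),
    rng.integers(0, 24, n),
    rng.poisson(1.0, n),
])

# Synthetic ground truth: large, late-night changes by bug-prone authors are riskier
risk = 0.002 * X[:, 0] + 0.05 * X[:, 1] + 0.03 * (X[:, 2] >= 22) + 0.1 * X[:, 3]
y = (rng.random(n) < np.clip(risk, 0, 0.9)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")

# Rank an incoming change by its predicted probability of introducing a bug
new_commit = np.array([[350, 12, 23, 3]])
print(f"Predicted bug risk: {model.predict_proba(new_commit)[0, 1]:.2f}")

A score like this would not block a change on its own; it would simply direct code review and testing effort toward the changes most likely to need it.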

A Few Considerations

These are certainly exciting prospects, and they have tremendous potential to help organizations quickly identify and terminate cyber attacks, or prevent them from succeeding in the first place. But when contemplating the introduction of AI into a cybersecurity program, IT security executives need to remain cognizant of the strategy’s shortcomings. In all likelihood, hackers will be using AI themselves to fool and defeat an organization’s AI defenses. No technology is confined to one side for long, and so far hackers have been the ones raising the stakes.

Plus, AI’s ability to reason and learn requires data, and it cannot analyze what it does not know. The machine-learning technology for software development, for example, can only “learn” bad coding behaviors by experiencing them, and only large organizations are likely to have a sufficient history of code for adequate AI “training.” Similarly, the technology will not necessarily recognize coding bugs that have not been seen in some previous form. More advanced deep-learning algorithms may provide an answer in the future, enabling the technology to “teach” itself to recognize poor coding practices and behaviors that it has not seen before. In the meantime, organizations must continue to deploy layers of defense, and AI is just one of them.

Also, consider talent. The market for cybersecurity talent has become increasingly competitive, and hiring qualified IT security specialists with AI knowledge presents companies with yet another challenge. The good news is that the pool of aspiring skilled workers is quite large. Consequently, to attract talent and gain a potential advantage over competitors, many organizations are cultivating early relationships with students through internships, sponsored degree programs or apprenticeships. This connection provides the organization with access to future employees, and potential employees with the prospect of an enjoyable, challenging and exciting career path – qualities that typically fuel loyalty and a desire to contribute to the success of the company.

A Decision to Make

AI in cybersecurity is more than an interesting idea – it is already advancing the efficiency and effectiveness of cybersecurity, as the examples above show. While the technology is clearly not the be-all and end-all of an IT defense strategy, we can be certain that as attackers find ways to use AI to launch malicious digital incursions, it will become a routine part of cyber defense as well. Organizations that move early to incorporate AI into their cybersecurity regimen and thoughtfully foster a pipeline of AI cybersecurity experts will greatly decrease the risk of suffering the debilitating effects of cyber attacks.