Since the dawn of the internet, companies have been fighting to stay ahead of cybercriminals. Artificial intelligence (AI) and machine learning have made this job easier by automating complex processes for detecting attacks and reacting to breaches. However, cybercriminals are also using this technology for their own malicious purposes.

More and more hackers are exploiting cognitive technologies to spy, take control of Internet of Things (IoT) devices and carry out malicious activities. CSO magazine called 2018 “the year of the AI-powered cyberattack”. For example, smart malware bots are now using AI to collect data from thousands of breached devices and can learn from that information to make future attacks more difficult to prevent and detect.

As hackers weaponize AI, cybersecurity professionals must fight fire with fire by using cognitive technology to identify and prevent attacks.

Sophisticated phishing at scale

Neural networks, modeled loosely on the human brain, can be used to automate “spear phishing”: the creation of phishing emails or tweets that are highly personal and target specific users. According to research presented at the Black Hat conference, automated spear phishing achieved a 30 to 66 percent success rate, which is 5 to 14 percentage points higher than large-scale traditional phishing campaigns and comparable with manual spear phishing campaigns.

Automation enables attackers to run spear phishing campaigns at an alarmingly large scale. However, companies are using the capabilities of AI as a countermeasure.

According to a recent Ponemon study, 52 percent of companies are looking to add in-house AI talent to help them boost their cybersecurity efforts, and 60 percent said AI could provide deeper security than purely human efforts. That’s why new security solutions such as IBM QRadar use machine learning to automate the threat detection process, helping cyber incident investigation and response efforts get started as much as 50 times faster than before.

CAPTCHA and authentication concerns

Another area in which AI tools are already helping cybercriminals do their dirty work is breaking complex codes, whether CAPTCHAs or usernames and passwords. Using techniques such as optical character recognition (OCR), the software can learn from millions of images, eventually gaining the ability to recognize and solve a CAPTCHA. Similarly, hackers combine the same character recognition with automated login requests to test stolen usernames and passwords across multiple sites, a tactic known as credential stuffing.
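The credential-stuffing pattern above has a telltale signature: one source address cycling through many distinct usernames. As a minimal illustrative sketch (the threshold and data shapes here are assumptions, not any vendor's API), a defender can surface that signal from a login log:

```python
from collections import defaultdict

# Hypothetical threshold: an IP attempting logins for many distinct
# usernames in a short window is a classic credential-stuffing signal.
DISTINCT_USER_THRESHOLD = 5

def flag_stuffing_ips(login_attempts, threshold=DISTINCT_USER_THRESHOLD):
    """Return source IPs that tried more distinct usernames than the threshold.

    login_attempts: iterable of (source_ip, username) tuples.
    """
    users_per_ip = defaultdict(set)
    for ip, user in login_attempts:
        users_per_ip[ip].add(user)
    return {ip for ip, users in users_per_ip.items() if len(users) > threshold}

# One address spraying 20 usernames versus a normal single-user login:
attempts = [("10.0.0.1", f"user{i}") for i in range(20)] + [("10.0.0.2", "alice")]
print(flag_stuffing_ips(attempts))  # -> {'10.0.0.1'}
```

Production defenses layer this kind of rule with rate limiting and learned per-account baselines, but the counting logic is the core of the signal.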

Fighting back against such large-scale attacks requires leaning on these same AI technologies. One way to do this is to use learning-enabled technology to understand what is normal for a system, then flag unusual incidents for human review. Security professionals need AI-based monitoring solutions to provide automated help and identify which alerts pose a real and immediate risk.


Smart malware that learns

Smart malware, which “learns” how to become less detectable, also poses a significant threat. Conventional malware is typically defeated by “capturing” a sample and reverse engineering it to figure out how it works. With smart malware, however, it is far more difficult to analyze how the underlying neural network decides whom to attack.

While reverse-engineering smart malware remains challenging, neural networks have been successful at recognizing malicious domains created by a domain generation algorithm (DGA), which creates pseudo-random domain names. A smart DGA keeps changing to stay ahead of attempts to thwart it, but, likewise, a smart neural network will continue to learn the strategies deployed by hackers and how to defeat them.
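Production DGA detectors use neural networks trained on labeled domains, but the intuition they exploit can be shown with a much simpler proxy: pseudo-random domain labels spread their characters nearly uniformly, so their Shannon entropy runs high, while human-chosen names reuse common letters. This sketch (the 3.5-bit threshold is an illustrative assumption) is a toy stand-in, not the neural-network approach itself:

```python
import math
from collections import Counter

def char_entropy(domain):
    """Shannon entropy (bits per character) of the domain's first label."""
    label = domain.split(".")[0].lower()
    counts = Counter(label)
    total = len(label)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_generated(domain, entropy_threshold=3.5):
    # DGA output tends toward near-uniform character use, hence high entropy.
    return char_entropy(domain) > entropy_threshold

print(looks_generated("ibm.com"))                 # -> False
print(looks_generated("xq7gj2kfh9zpl4vw8d.com"))  # -> True
```

A neural network earns its keep precisely where this heuristic fails: a smart DGA can be tuned to produce low-entropy, pronounceable names, and a model that keeps retraining on fresh samples can keep up, as the paragraph above notes.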

Fight security threats before they happen

One of the most powerful aspects of security enabled by AI and machine learning is the ability to uncover patterns and learn from unstructured data. As a result, these tools can provide security professionals with the means to combat attacks, as well as insights into emerging threats and recommendations on how to defend against impending incidents. Additionally, machine learning can help locate vulnerabilities that may be difficult for human security teams to find.

Cybercriminals are already using AI to launch larger-scale, more sophisticated attacks. Here’s the good news: companies can fight back by using these same technologies. If your organization has been considering implementing AI but hasn’t yet put a plan in place, the time is now, and the business case has arrived. Cognitive technologies such as neural networks and automated security monitoring solutions can help bring your business’s defenses into the cyber age and give you the most cutting-edge weapons to defend against emerging threats.

Discover the ways that IBM Cloud Private for Data can enable security by supporting the development and deployment of AI and machine learning capabilities.
