In cybersecurity, the human touch makes all the difference
The last year has seen a lot of ink spilled about the power of artificial intelligence – from the fascinating capabilities of ChatGPT to the use of AI-generated voices and visuals in music, TV and film production.
Unsurprisingly, as the potential promise of AI extends across a growing array of applications, it has also driven a sea change in the world of cybersecurity. New solutions are leveraging advanced AI systems to support faster detection and remediation, as well as the automation of ongoing cyber activities with a view to delivering more resilient defence that keeps businesses secure.
But amidst all the AI hype, it’s important to distinguish between the perceived value AI can deliver and the reality of its application and limitations.
In this blog, we’ll look at what AI can offer to cybersecurity, some of the challenges it faces, and the best path forward for businesses to stay secure.
Where is AI adding value today?
AI-powered tools have been commonplace in cybersecurity for quite some time, and for good reason. As part of your cyber arsenal, AI can help teams with limited resources extend their reach and scale.
The detection and quarantine of certain threats can be automated, as AI platforms leverage the insights of supplied datasets and previous attack indicators to assess new threats within the confines of specific parameters. Crucially, this can be done with a level of speed and responsiveness that is difficult to achieve with in-house teams, especially those limited to business-hours monitoring only.
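To make this concrete, here's a minimal Python sketch of the kind of automated detection and quarantine described above. All names, indicators and thresholds are hypothetical illustrations, not a description of any specific product: the point is that events matching known attack indicators can be handled automatically, within pre-defined parameters, around the clock.

```python
# A minimal sketch (all names and thresholds hypothetical) of AI-assisted
# detection: score incoming events against indicators drawn from previous
# attacks, and auto-quarantine anything above a confidence threshold.
from dataclasses import dataclass

@dataclass
class Event:
    source_ip: str
    payload_hash: str
    failed_logins: int

# Indicators distilled from previously observed attacks (the supplied dataset).
KNOWN_BAD_HASHES = {"9f86d081884c7d65", "2c26b46b68ffc68f"}

def threat_score(event: Event) -> float:
    """Combine simple indicator checks into a 0..1 score."""
    score = 0.0
    if event.payload_hash in KNOWN_BAD_HASHES:
        score += 0.7  # direct match on a known attack artefact
    if event.failed_logins > 10:
        score += 0.3  # behavioural signal: possible brute force
    return min(score, 1.0)

def handle(event: Event, quarantine_threshold: float = 0.6) -> str:
    """Act automatically only within the confines of specific parameters."""
    if threat_score(event) >= quarantine_threshold:
        return "quarantined"  # automated response, no human in the loop
    return "logged"           # retained for analysts to review later

print(handle(Event("203.0.113.7", "9f86d081884c7d65", 2)))   # quarantined
print(handle(Event("198.51.100.4", "aabbccdd00112233", 3)))  # logged
```

Because this logic runs in milliseconds and never sleeps, it offers the speed and responsiveness that a business-hours-only team cannot match on its own.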
For the business, this alleviates some of the admin burden on internal resources, offloading the threat monitoring and detection process and freeing teams to focus on other areas.
The challenges of AI
While the benefits of AI might tempt businesses to weight their cyber strategy heavily towards AI-powered tools, an approach that relies on technology alone is not without its flaws. There is little doubt that AI-powered technologies are a valuable part of any cyber posture, but several challenges prevent them from being the silver bullet some perceive them to be.
False positives occur when an AI incorrectly flags an event as malicious, usually as a result of the blanket application of pre-defined rules. These require human intervention, both to untangle what happened and identify when an alert is a false positive, and to fine-tune the rules the AI applies in order to minimise future mis-alerts.
Similarly, AI security solutions are unable to bring the same understanding of context that a human security specialist can. For example, an AI might view a new technology being added to an organisation’s security stack as a threat actor, and flag its ordinary activity as malicious. Without a human overseeing the system to give this context, a business could end up investing heavily in countering a non-existent threat. At the same time, this lack of context means that persistent threats might be interpreted as a baseline level of activity, allowing cybercriminals to act undetected.
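The sketch below illustrates both problems at once. It is a hedged, hypothetical example (the addresses, rule and allowlist are invented for illustration): a blanket rule flags every port scan, including one from a scanner the security team deployed themselves, and only human-supplied context tells the two apart.

```python
# A hypothetical illustration of false positives and missing context:
# a blanket rule flags all port scans, including one from a newly
# deployed internal vulnerability scanner. A human-maintained
# allowlist supplies the context the rule engine lacks.
ALERTS = [
    {"src": "10.0.0.12", "behaviour": "port_scan"},    # new internal scanner
    {"src": "203.0.113.9", "behaviour": "port_scan"},  # genuinely unknown host
]

# Context only a human can supply: tools the organisation deployed itself.
ANALYST_ALLOWLIST = {"10.0.0.12"}

def triage(alert: dict) -> str:
    if alert["behaviour"] == "port_scan":          # blanket pre-defined rule
        if alert["src"] in ANALYST_ALLOWLIST:      # human-supplied context
            return "suppressed (known scanner)"
        return "escalate to analyst"
    return "ignore"

for alert in ALERTS:
    print(alert["src"], "->", triage(alert))
```

Without the allowlist, the new scanner's ordinary activity would be escalated as malicious every time it ran; with no human curating that context, the opposite failure is just as likely, with genuinely persistent activity quietly absorbed into the baseline.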
Finally, cybercriminals can target their attacks at the blind spots of AI. Because an AI can only understand and interpret based on the datasets it has been given, attackers can tailor their methods to operate beyond its understanding, going entirely undetected unless a human notices their activity and responds accordingly.
AI in cybercrime
There’s one other key challenge to the blanket adoption of AI-based tools in a cyber strategy. As the old adage says, a rising tide lifts all boats: advancements in AI capabilities aren’t limited to helping IT teams, and bad actors can take advantage of them too.
For example, the widespread availability of generative AI platforms has lowered the barrier for entry for cybercriminals to adopt more advanced – and dangerous – techniques:
- The ability to use AI to generate content has been a significant boon to cybercriminals – tools like ChatGPT allow for the automated creation of professional-sounding content for phishing emails, while AI-generated faces and deepfake technology make it easier for bad actors to conduct social engineering and bypass existing security measures.
- Combinations of AI-powered tools allow cybercriminals to automate attacks – whether that’s conducting phishing campaigns, using automated tools to brute force a username/password combination, or circumventing the guardrails built into a tool like ChatGPT and using it to enhance malicious code.
- As well as automation, AI allows bad actors to enhance their attacks, using algorithms to maximise the potential payout from an organisation. For example, an AI might be trained to seek out business-critical data to encrypt with ransomware, or to work against a security system so that advanced persistent threats (APTs) go unnoticed.
A blend of human and machine
There’s no doubting the incredible potential of AI as part of a cyber posture, but there’s also no doubt in our minds that the best cybersecurity strategies are built on the combination of human instincts and world-class technology.
Used well, AI allows security teams to handle their workloads far more efficiently, supporting fast and effective threat detection and presenting alerts and information for human experts to analyse and act upon.
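One simple way to picture that division of labour is below. This is a minimal sketch with a hypothetical scoring function, not any particular vendor's pipeline: the machine ranks alerts by estimated severity so human experts spend their time on judgement and response rather than triage.

```python
# A minimal sketch (hypothetical scores) of the blended model: the AI
# ranks alerts by estimated severity; humans make the final call.
def machine_score(alert: dict) -> float:
    """Stand-in for an AI model's severity estimate (0 = benign, 1 = critical)."""
    weights = {"ransomware_ioc": 0.9, "phishing_link": 0.6, "failed_login": 0.2}
    return weights.get(alert["type"], 0.1)

def build_analyst_queue(alerts: list[dict]) -> list[dict]:
    """AI handles the sorting; humans handle the judgement and response."""
    return sorted(alerts, key=machine_score, reverse=True)

alerts = [
    {"id": 1, "type": "failed_login"},
    {"id": 2, "type": "ransomware_ioc"},
    {"id": 3, "type": "phishing_link"},
]
for alert in build_analyst_queue(alerts):
    print(f"analyst reviews alert {alert['id']} ({alert['type']})")
```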
This idea lies at the heart of our TruTrust© approach – delivering a perfect blend of human cyber expertise supported by powerful AI tools to always stay ahead of cybercriminals.
If you’d like to learn more about TruTrust©, check out this page, or, if you’re interested in how Zepko can help your business stay secure, get in touch.