Unless you purposely avoid social media or the internet completely, you’ve likely heard about a new AI model called ChatGPT, which is currently open to the public for testing. This allows cybersecurity professionals like me to see how it might be useful to our industry.
Widely available machine learning/artificial intelligence (ML/AI) is relatively new for cybersecurity practitioners. One of the most common use cases has been endpoint detection and response (EDR), where ML/AI uses behavioral analytics to pinpoint anomalous activities. It can learn known-good behavior to discern outliers, then identify and kill processes, lock accounts, trigger alerts and more.
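To make the behavior-analytics idea concrete, here is a minimal sketch using scikit-learn's IsolationForest to flag process telemetry that deviates from a known-good baseline. The feature set and thresholds are illustrative assumptions, not how any particular EDR product works.

```python
# Minimal sketch of behavior-based anomaly detection in the spirit of EDR
# analytics. The features (CPU %, network KB sent, child-process count) are
# hypothetical; real EDR telemetry is far richer and the models proprietary.
import numpy as np
from sklearn.ensemble import IsolationForest

# Baseline of "known good" process behavior: [cpu_pct, net_kb_sent, child_procs]
baseline = np.array([
    [2.0, 15.0, 0],
    [3.5, 20.0, 1],
    [1.0, 5.0, 0],
    [4.0, 30.0, 1],
    [2.5, 12.0, 0],
])

model = IsolationForest(contamination=0.1, random_state=42).fit(baseline)

# New observations; the second (high CPU, heavy egress, many child processes)
# should be flagged as an outlier worth alerting on.
observed = np.array([
    [3.0, 18.0, 0],
    [85.0, 900.0, 12],
])
for sample, label in zip(observed, model.predict(observed)):
    status = "ANOMALY" if label == -1 else "normal"
    print(f"{sample} -> {status}")
```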
Whether it’s used to automate tasks or to assist in building and fine-tuning new ideas, ML/AI can certainly help amplify security efforts and reinforce a sound cybersecurity posture. Let’s look at a few of the possibilities.
AI and its potential in cybersecurity
When I started in cybersecurity as a junior analyst, I was responsible for detecting fraud and security events using Splunk, a security information and event management (SIEM) tool. Splunk has its own language, Search Processing Language (SPL), which can increase in complexity as queries get more advanced.
That context helps in understanding the power of ChatGPT, which has already learned SPL and can turn a junior analyst’s prompt into a query in seconds, significantly lowering the barrier to entry. If I asked ChatGPT to write an alert for a brute-force attack against Active Directory, it would create the alert and explain the logic behind the query. Since the result is closer to a standard SOC-type alert than to an advanced Splunk search, it can be a perfect guide for a rookie SOC analyst.
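To illustrate, here is a hedged sketch of the kind of failed-logon alert such a prompt might yield, run through the Splunk SDK for Python (splunk-sdk). The host, credentials, index name and failure threshold are all placeholder assumptions, not a recommendation.

```python
# A sketch of the kind of brute-force detection ChatGPT might draft, executed
# via the Splunk SDK for Python. Connection details and the threshold of 20
# failures per hour are placeholders for illustration only.
import splunklib.client as client
import splunklib.results as results

# Connect to a Splunk instance (management port 8089 by default).
service = client.connect(
    host="splunk.example.com",  # hypothetical host
    port=8089,
    username="admin",
    password="changeme",
)

# EventCode 4625 is a failed Windows logon; flag sources with an unusual
# number of failures across distinct accounts in the last hour.
spl = (
    "search index=wineventlog EventCode=4625 earliest=-1h "
    "| stats count dc(user) AS distinct_users BY src_ip "
    "| where count > 20"
)

for event in results.JSONResultsReader(
    service.jobs.oneshot(spl, output_mode="json")
):
    if isinstance(event, dict):  # skip informational Message objects
        print(f"Possible brute force from {event['src_ip']}: "
              f"{event['count']} failures, {event['distinct_users']} accounts")
```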
Another compelling use case for ChatGPT is automating daily tasks for an overextended IT team. In nearly every environment, the number of stale Active Directory accounts can range from dozens to hundreds. These accounts often have privileged permissions, and while a full privileged access management technology strategy is recommended, businesses may not be able to prioritize its implementation.
This creates a situation where the IT team resorts to the age-old DIY approach: system administrators write and schedule their own scripts to disable stale accounts.
The creation of these scripts can now be turned over to ChatGPT, which can build the logic to identify and disable accounts that have not been active in the past 90 days. If a junior engineer can create and schedule this script while learning how the logic works, then ChatGPT can help free up senior engineers and administrators for more advanced work.
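As an illustration, here is a minimal Python sketch of that logic using the ldap3 library. The domain controller, base DN and service account are hypothetical, and any real rollout should begin with a report-only dry run before disabling anything.

```python
# A minimal sketch of a stale-account cleanup script, the kind of logic
# ChatGPT can draft. Server, base DN, and credentials are placeholders;
# run in report-only mode first in any real environment.
from datetime import datetime, timedelta, timezone
from ldap3 import Server, Connection, MODIFY_REPLACE

DAYS_STALE = 90
ACCOUNTDISABLE = 0x2  # userAccountControl flag that disables an account

# lastLogonTimestamp is a Windows FILETIME: 100-ns ticks since 1601-01-01.
cutoff = datetime.now(timezone.utc) - timedelta(days=DAYS_STALE)
filetime = int((cutoff - datetime(1601, 1, 1, tzinfo=timezone.utc))
               .total_seconds() * 10_000_000)

conn = Connection(Server("dc01.example.com"),
                  user="EXAMPLE\\svc_cleanup", password="changeme",
                  auto_bind=True)

# Find user accounts whose last logon predates the cutoff.
conn.search("dc=example,dc=com",
            f"(&(objectCategory=person)(objectClass=user)"
            f"(lastLogonTimestamp<={filetime}))",
            attributes=["userAccountControl"])

for entry in conn.entries:
    uac = int(entry.userAccountControl.value)
    if not uac & ACCOUNTDISABLE:  # skip accounts already disabled
        conn.modify(entry.entry_dn,
                    {"userAccountControl": [(MODIFY_REPLACE,
                                             [uac | ACCOUNTDISABLE])]})
        print(f"Disabled stale account: {entry.entry_dn}")
```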
If you’re looking for a force multiplier in a dynamic exercise, ChatGPT can be used for purple teaming, a collaboration of red and blue teams to test and improve an organization’s security posture. It can build simple examples of scripts a penetration tester might use, or debug scripts that aren’t working as expected.
One MITRE ATT&CK tactic that is nearly universal in cyber incidents is persistence. For example, a standard persistence technique that an analyst or threat hunter should look for is an attacker adding a script or command that runs at startup on a Windows machine. With a simple request, ChatGPT can create a rudimentary but functional script that enables a red-teamer to add this persistence to a target host. While the red team uses this tool to aid penetration tests, the blue team can use it to understand what those tools may look like and create better alerting mechanisms.
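For instance, here is a hedged sketch of the registry Run-key variant of that technique (ATT&CK T1547.001), suitable only for an authorized exercise. The value name and payload path are placeholders.

```python
# A rudimentary sketch of Run-key persistence, the kind of script ChatGPT
# can generate for an authorized red-team exercise (MITRE ATT&CK T1547.001).
# Windows-only; the payload path is a placeholder. Blue teams can alert on
# writes to this registry key.
import winreg

RUN_KEY = r"Software\Microsoft\Windows\CurrentVersion\Run"
VALUE_NAME = "Updater"                    # innocuous-looking name
PAYLOAD = r"C:\Users\Public\implant.exe"  # hypothetical test payload

# Entries under HKCU\...\Run launch at every logon for the current user.
with winreg.OpenKey(winreg.HKEY_CURRENT_USER, RUN_KEY, 0,
                    winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, VALUE_NAME, 0, winreg.REG_SZ, PAYLOAD)
print(f"Persistence entry '{VALUE_NAME}' -> {PAYLOAD} written to HKCU Run key")
```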
Benefits are plenty, but so are the limits
Of course, when a situation or research scenario calls for analysis, AI is a critically useful aid for expediting that analysis or suggesting alternative paths through it. Especially in cybersecurity, whether for automating tasks or sparking new ideas, AI can reduce the effort required to maintain a sound cybersecurity posture.
However, there are limits to this usefulness, and by that I mean the complex human cognition, coupled with real-world experience, that decision-making often requires. We cannot program an AI tool to function like a human being; we can only use it for support, to analyze data and produce output based on the facts we feed it. While AI has made great leaps in a short amount of time, it still produces false positives that a human must identify.
Still, one of the biggest benefits of AI is automating daily tasks to free up humans for more creative or time-intensive work. AI can be used to create scripts, or to make existing ones more efficient, for cybersecurity engineers and system administrators. I recently used ChatGPT to rewrite a dark-web scraping tool I had created, which reduced its completion time from days to hours.
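The tool itself isn’t reproduced here, but one common class of speedup such a rewrite delivers is making I/O-bound scraping concurrent. The sketch below, with placeholder URLs and no Tor plumbing, shows the pattern.

```python
# A hedged sketch of the kind of rewrite that turns a sequential scraper
# into a concurrent one. URLs are placeholders; a real dark-web scraper
# would route through a Tor SOCKS proxy and handle retries.
from concurrent.futures import ThreadPoolExecutor, as_completed
import requests

urls = [f"http://example.onion/page/{i}" for i in range(200)]  # placeholders

def fetch(url: str) -> tuple[str, int]:
    # One blocking round-trip per page; the pool overlaps many of these.
    resp = requests.get(url, timeout=30)
    return url, len(resp.content)

# 20 workers turn 200 sequential round-trips into ~10 parallel batches.
with ThreadPoolExecutor(max_workers=20) as pool:
    futures = [pool.submit(fetch, u) for u in urls]
    for fut in as_completed(futures):
        try:
            url, size = fut.result()
            print(f"{url}: {size} bytes")
        except requests.RequestException as exc:
            print(f"fetch failed: {exc}")
```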
Without question, AI is an important tool that security practitioners can use to alleviate repetitive and mundane tasks, and it can also provide instructional aid for less experienced security professionals.
If there are drawbacks to AI informing human decision-making, it is that anytime we use the word “automation,” there’s a palpable fear that the technology will evolve to eliminate the need for humans in their jobs. In the security sector, we also have tangible concerns that AI can be used nefariously. Unfortunately, the latter concern has already proven true, with threat actors using these tools to create more convincing and effective phishing emails.
In terms of decision-making, it is still very early days to rely on AI for final decisions in practical, everyday situations. The human capacity for subjective, context-dependent judgment is central to the decision process, and thus far AI lacks the ability to emulate it.
So, while the various iterations of ChatGPT have created a fair amount of buzz since the preview last year, as with other new technologies, we must address the uneasiness it has generated. I don’t believe that AI will eliminate jobs in information technology or cybersecurity. On the contrary, it will take on the repetitive and mundane work so that practitioners can focus on harder problems.
While we’re witnessing the early days of AI technology, and even its creators appear to have a limited understanding of its power, we have barely scratched the surface of possibilities for how ChatGPT and other ML/AI models will transform cybersecurity practices. I’m looking forward to seeing what innovations are next.
Thomas Aneiro is senior director for technology advisory services at Moxfive.