10 ways SecOps can strengthen cybersecurity with ChatGPT

Security operations teams are seeing first-hand how fast attackers re-invent their attack strategies, automate attacks on multiple endpoints, and do whatever they can to break their targets’ cyber-defenses. Attackers are relentless. They see holidays, for example, as excellent opportunities to penetrate an organization’s cybersecurity defenses. As a result, SecOps teams are on call 24×7, including weekends and holidays, battling burnout, alert fatigue and the lack of balance in their lives. It is as brutal as it sounds.

As the CISO of a leading insurance and financial services firm told VentureBeat, “Since hackers constantly change their attack methods, SecOps teams are under constant, immediate pressure to protect our company from new threats. It’s been my experience that when overworked teams use siloed technology, it takes double or triple the effort … to stop fewer intrusions.”

ChatGPT shows potential for closing the SecOps gap

One of the biggest challenges of leading a SecOps team is gaining scale from legacy systems that each produce a different type of alert, alarm and real-time data stream. Of the many gaps created by this lack of integration, the most troubling and exploited is not knowing whether a given identity has the right to use a specific endpoint — and if it does, for how long. Systems that unify endpoints and identities are helping to define the future of zero trust, and ChatGPT shows potential for troubleshooting identity-endpoints gaps — and many other at-risk threat surfaces.

Attackers are fine-tuning their tradecraft to exploit these gaps. SecOps teams know this, and have been taking steps to harden their defenses. These include putting least-privileged access to work; logging and monitoring every endpoint activity; enforcing authentication; and eradicating zombie credentials from Active Directory and other identity and access management (IAM) systems. After all, attackers are after identities, and CISOs must stay vigilant in keeping IAM systems current and hardened to threats.

But SecOps teams face additional challenges too, including fine-tuning threat intelligence; providing real-time threat data visibility across every security operations center (SOC); reducing alert fatigue and false positives; and consolidating their disparate tools. These are areas where ChatGPT is already helping SecOps teams strengthen their cybersecurity.

Consolidating disparate tools is helping close the identity-endpoint gap. It provides more consistent visibility of all threat surfaces and potential attack vectors. “We’re seeing customers say, ‘I want a consolidated approach because economically or through staffing, I just can’t handle the complexity of all these different systems and tools,’” Kapil Raina, vice president of zero trust, identity, cloud and observability at CrowdStrike, told VentureBeat during a recent interview.

“We’ve had a number of use cases,” Raina said, “where customers have saved money so they’re able to consolidate their tools, which allows them to have better visibility into their attack story, and their threat graph makes it simpler to act upon and lower the risk through internal operations or overhead that would otherwise slow down the response.”

Lessons learned from piloting generative AI and ChatGPT 

One lesson CISOs piloting and using ChatGPT-based systems in SecOps have learned, they tell VentureBeat, is that they must be thorough in getting data sanitization and governance right, even if it means delaying internal tests or launch. 

They have also learned to choose the use cases that contribute most to corporate objectives, and to define how those contributions will be measured. 

Third, they must build recursive workflows using tools that can validate the alerts and incidents ChatGPT reports, so they know which are actionable and which are false positives.
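
A minimal sketch of such a validation workflow follows, in Python. The helper names (llm_triage, rule_based_check), the alert fields and the benign-signature list are all hypothetical, not any specific vendor's API; the point is the structure: nothing ChatGPT flags reaches the analyst queue until a deterministic check agrees, and disagreements are logged for human review and model tuning.

```python
# Sketch of a recursive validation workflow: LLM triage output is only treated
# as actionable after a deterministic rule check agrees with it.
# Helper names and the alert format are illustrative placeholders.

KNOWN_BENIGN_SIGNATURES = {"scheduled_backup", "patch_agent_restart"}  # illustrative only

def llm_triage(alert: dict) -> str:
    # Placeholder: call your ChatGPT-based triage service here and map its
    # answer to "true_positive" or "false_positive".
    return alert.get("llm_verdict", "true_positive")

def rule_based_check(alert: dict) -> str:
    # Deterministic validation: allow-lists, asset criticality, known-benign signatures.
    if alert.get("signature") in KNOWN_BENIGN_SIGNATURES:
        return "false_positive"
    return "true_positive"

def triage(alerts: list[dict]) -> tuple[list[dict], list[dict]]:
    actionable, needs_review = [], []
    for alert in alerts:
        llm_verdict, rule_verdict = llm_triage(alert), rule_based_check(alert)
        if llm_verdict == rule_verdict == "true_positive":
            actionable.append(alert)      # both agree: escalate to analysts
        elif llm_verdict != rule_verdict:
            needs_review.append(alert)    # disagreement: human review, feed back into tuning
        # both say false positive: suppress, but keep for audit logging
    return actionable, needs_review
```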

10 ways SecOps teams can strengthen cybersecurity with ChatGPT

It’s critical to know if, and how, spending on ChatGPT-based solutions strengthens the business case for zero-trust security and, from the board’s perspective, strengthens risk management. 

The CISO for a leading financial services firm told VentureBeat that it’s prudent to evaluate only the cybersecurity vendors that have large language models (LLMs). They don’t recommend using ChatGPT itself, which never forgets any data, information, or threat analysis, making its internal use a confidentiality risk.

Airgap Networks, for example, introduced its Zero Trust Firewall (ZTFW) with ThreatGPT, which uses graph databases and GPT-3 models to help SecOps teams gain new threat insights. The GPT-3 models analyze natural language queries and identify security threats, while graph databases provide contextual intelligence on endpoint traffic relationships. Other options include Cisco Security Cloud and CrowdStrike, whose Charlotte AI will be available to every customer using the Falcon platform.

Additional vendors include Google Cloud Security AI Workbench, Microsoft Security Copilot, Mostly AI, Recorded Future, SecurityScorecard, SentinelOne, Veracode, ZeroFox and Zscaler. Zscaler announced three generative AI projects in preview at its Zenith Live 2023 last month in Las Vegas.

Here are 10 ways ChatGPT is helping SecOps teams strengthen cyber-defenses against an onslaught of attacks, including ransomware, which grew 40% in the last year alone.

1. Detection engineering is proving to be a strong use case

Detection engineering is predicated on real-time security threat detection and response. CISOs running pilots say that their SecOps teams can detect, respond to, and have LLMs learn from actual versus false-positive alerts and threats. ChatGPT is proving effective at automating baseline detection engineering tasks, freeing up SecOps teams to investigate more complex alert patterns.
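
As an illustration of what a "baseline detection engineering task" can look like in practice, the sketch below hands a raw alert to a model and asks for a structured true/false-positive verdict. It assumes the OpenAI Python SDK (v1 interface) and uses a placeholder model name; any ChatGPT-style endpoint a security vendor exposes could fill the same role.

```python
import json
from openai import OpenAI  # assumes the v1 OpenAI Python SDK and OPENAI_API_KEY in the environment

client = OpenAI()

SYSTEM_PROMPT = (
    "You are a detection engineering assistant. Given a raw security alert, "
    "reply with JSON only: {\"verdict\": \"true_positive\" or \"false_positive\", "
    "\"confidence\": 0-1, \"reason\": \"...\"}"
)

def triage_alert(raw_alert: str) -> dict:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        temperature=0,
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": raw_alert},
        ],
    )
    return json.loads(response.choices[0].message.content)

# Example: a suspicious PowerShell execution event pulled from an EDR feed
verdict = triage_alert("powershell.exe -enc <base64 payload> spawned by winword.exe")
print(verdict["verdict"], verdict["confidence"])
```

Keeping the output machine-readable is a deliberate choice: structured verdicts can feed the kind of validation workflow described in the lessons-learned section rather than landing as free text in an analyst's inbox.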

2. Improving incident response at scale

CISOs piloting ChatGPT tell VentureBeat that their proof of concept (PoC) programs show that their testing vendor’s platform provides actionable, accurate guidance on responding to an incident.

Hallucinations happen in the most complex testing scenarios. This means the LLMs supporting ChatGPT must keep contextual references accurate. “That’s a big challenge for our PoC as we’re seeing our ChatGPT solution perform well on baseline incident response,” one CISO told VentureBeat in a recent interview. “The greater the contextual depth, the more our SecOps teams need to train the model.”

The CISO added that it’s performing well on automating recurring incident response tasks, and this frees up time for SecOps team members who previously had to do those tasks manually.
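
One hedged example of such a recurring task: drafting a first-pass containment checklist from an incident's structured fields, which an analyst then edits rather than writes from scratch. The model name, prompt and incident fields below are placeholders, not any specific vendor's workflow.

```python
from openai import OpenAI

client = OpenAI()

def draft_containment_checklist(incident: dict) -> str:
    """Turn structured incident fields into a first-pass containment checklist for analyst review."""
    prompt = (
        "Draft a numbered containment checklist for this incident. "
        "Be specific about hosts and accounts, and flag any step that needs change-control approval.\n"
        f"Type: {incident['type']}\n"
        f"Hosts: {', '.join(incident['hosts'])}\n"
        f"Accounts involved: {', '.join(incident['accounts'])}\n"
        f"Detection source: {incident['source']}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(draft_containment_checklist({
    "type": "credential stuffing",
    "hosts": ["vpn-gw-02"],
    "accounts": ["svc_backup"],
    "source": "SIEM correlation rule 4412",
}))
```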

3. Streamlining SOC operations at scale to offload overworked analysts

A leading insurance and financial services firm is running a PoC on ChatGPT to see how it can help overworked security operations center (SOC) analysts by automatically analyzing cybersecurity incidents and making recommendations for immediate and long-term responses. SOC analysts are also testing whether ChatGPT can provide risk assessments and recommendations on various scripts. And they are testing how effective ChatGPT is at advising IT, security teams and employees on security policies and procedures; on employee training; and on improving learning retention rates.
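
The script-review piece of such a PoC might look like the sketch below: hand the model a script an analyst or employee wants to run and ask for a risk rating plus the lines that drive it. The model name and script filename are hypothetical, and the output is a second opinion for a human reviewer, not an approval.

```python
from pathlib import Path
from openai import OpenAI

client = OpenAI()

def assess_script_risk(script_path: str) -> str:
    script = Path(script_path).read_text()
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        temperature=0,
        messages=[
            {"role": "system", "content": (
                "You review scripts for a SOC. Return a risk rating (low/medium/high), "
                "the specific lines that drive the rating, and safer alternatives."
            )},
            {"role": "user", "content": script},
        ],
    )
    return response.choices[0].message.content

print(assess_script_risk("cleanup_temp_accounts.ps1"))  # hypothetical script name
```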

4. Working toward real-time visibility and vulnerability management

Several CISOs have told VentureBeat that while improving visibility across the diverse, disparate tools they rely on in SOCs is a high priority, achieving this is challenging. ChatGPT is helping by being trained on real-time data to provide real-time vulnerability reports that list all known and detected threats or vulnerabilities by asset across the organization’s network.

The real-time vulnerability reports can be ranked by risk and severity level and paired with recommended actions, provided that level of data is being used to train the LLMs.
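
Whatever generates the findings, the ranking itself can stay deterministic so the LLM-written narrative sits on top of a reproducible sort rather than deciding the order. A minimal sketch, with hypothetical field names and weights:

```python
from dataclasses import dataclass

SEVERITY_WEIGHT = {"critical": 4, "high": 3, "medium": 2, "low": 1}

@dataclass
class Finding:
    asset: str
    cve: str
    severity: str           # critical / high / medium / low
    asset_criticality: int  # 1 (lab box) to 5 (crown-jewel system); illustrative scale
    recommendation: str

def rank_findings(findings: list[Finding]) -> list[Finding]:
    # Order by severity weight times asset criticality, worst first.
    return sorted(
        findings,
        key=lambda f: SEVERITY_WEIGHT[f.severity] * f.asset_criticality,
        reverse=True,
    )

report = rank_findings([
    Finding("payments-db-01", "CVE-2023-23397", "high", 5, "Apply the March Outlook patch"),
    Finding("dev-sandbox-07", "CVE-2022-22965", "critical", 1, "Upgrade Spring Framework"),
])
for f in report:
    print(f.asset, f.cve, f.severity, f.recommendation)
```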

5. Increasing accuracy, availability and context of threat intelligence

ChatGPT is proving effective at predicting potential threat and intrusion scenarios based on real-time analysis of monitoring data across enterprise networks, combined with the knowledge base the LLMs supporting them are constantly creating. One CISO running a ChatGPT pilot says the goal is to test whether the system can differentiate between false positives and actual threats.

The most valuable aspect of the pilot so far is the LLMs’ potential in analyzing the massive amount of threat intelligence data the organization is capturing and then providing contextualized, real-time and relevant insights to SOC analysts.

6. Identifying how security configurations can be fine-tuned and optimized for a given set of threats

Knowing that manual misconfigurations of cybersecurity and threat detection systems are one of the leading causes of breaches, CISOs are interested in how ChatGPT can help identify and recommend configuration improvements by interpreting the data indicators of compromise (IoCs) provided.

The goal is to find out how best to fine-tune configurations to minimize the false positives sometimes caused by IoC-based alerts triggered by a less-than-optimal configuration.
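
A sketch of that kind of query: feed the model the IoCs behind recently closed false positives together with the current rule, and ask for specific tuning changes. The rule syntax below is illustrative pseudo-configuration, not a real product's format, and the model name is a placeholder; any recommendation would still go through normal change control.

```python
from openai import OpenAI

client = OpenAI()

current_rule = """
rule: suspicious_admin_logon
condition: logon_type == 10 AND hour NOT BETWEEN 08 AND 18
action: alert(severity=high)
"""  # illustrative pseudo-rule, not a real product's syntax

false_positive_iocs = [
    "logon_type=10 src=10.4.2.18 user=backup_admin 02:15 nightly backup window",
    "logon_type=10 src=10.4.2.19 user=backup_admin 02:17 nightly backup window",
]

prompt = (
    "These indicators triggered alerts that analysts closed as false positives.\n"
    f"Current rule:\n{current_rule}\nFalse-positive IoCs:\n" + "\n".join(false_positive_iocs) +
    "\nSuggest specific configuration changes that cut these false positives "
    "without weakening detection of genuine off-hours admin logons."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```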

7. Reducing the time wasted on false positives

The wasted time spent on false positives is one reason CISOs, CIOs and their boards are evaluating secure, generative AI-based platforms. Several studies have shown how much time SOC analysts waste chasing down alerts that turn out to be false positives. Invicti found that SOCs spend 10,000 hours and $500,000 annually validating unreliable vulnerability alerts. An Enterprise Strategy Group (ESG) survey found that web application and API security tools generate 53 alerts per day, 45% of which are false positives.

One CISO running a pilot across several SOCs said the most significant result so far is how generative AI accessible through a ChatGPT interface drastically reduces the time wasted resolving false positives. 

8. More thorough, accurate and secure code analysis

Cybersecurity researchers continue to test and push ChatGPT to see how it handles more complex secure code analysis. Victor Sergeev published one of the more comprehensive tests. “ChatGPT successfully identified suspicious service installations, without false positives. It produced a valid hypothesis that the code is being used to disable logging or other security measures on a Windows system,” Sergeev wrote.

As part of this test, Sergeev infected a target system with the Meterpreter and PowerShell Empire agents and emulated a few typical adversary procedures. Running the scanner against the target system produced a scan report enriched with ChatGPT conclusions. It correctly identified the two malicious processes among 137 concurrently running benign processes, without any false positives.
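
In the same spirit as Sergeev's experiment, the sketch below loops over scanner findings and records a per-finding verdict from the model. It is a generic illustration rather than his actual scanner code, and it assumes the OpenAI Python SDK with a placeholder model name and made-up findings.

```python
from openai import OpenAI

client = OpenAI()

findings = [  # illustrative output of a host scan: process and service observations
    "New service installed: 'UpdaterSvc' running powershell.exe -w hidden -enc <base64>",
    "Process svchost.exe -k netsvcs, parent services.exe, signed by Microsoft",
]

def enrich_finding(finding: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        temperature=0,
        messages=[
            {"role": "system", "content": (
                "You analyze host scan findings. Say whether the finding is likely malicious "
                "or benign, and state the hypothesis behind your conclusion in two sentences."
            )},
            {"role": "user", "content": finding},
        ],
    )
    return response.choices[0].message.content

for finding in findings:
    print(finding, "->", enrich_finding(finding))
```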

9. Improve SOC standardization and governance, contributing to a more robust security posture

CISOs say that just as crucial as improving visibility across diverse and often disparate tools at a technology level is improving standardization of SOC processes and procedures. Consistent workflows that can adapt to changes in the security landscape are critical to staying ahead of security incidents.

As the CISO of a company that produces microcomponents for the electronics industry put it, the goal is to “get our standardization act together and ensure no IP is ever compromised.”

10. Automate SIEM query writing and daily scripts used for SOC operations

Security information and event management (SIEM) queries are essential for analyzing real-time event log data from every available database and source to identify anomalies. They’re an ideal use case for generative AI and ChatGPT-based cybersecurity.

An SOC analyst with a major financial services firm told VentureBeat that SIEM queries could quickly grow to 30% of her job or more, and that automating their creation and updating would free up at least a day and a half a week.
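
A sketch of automating that query writing, assuming Splunk's SPL as the target query language, a placeholder model name and a hypothetical index name; a generated query would still be reviewed and tested against a non-production index before it lands in a scheduled search.

```python
from openai import OpenAI

client = OpenAI()

def nl_to_spl(request: str) -> str:
    """Translate a plain-English detection question into a draft SPL query for analyst review."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        temperature=0,
        messages=[
            {"role": "system", "content": (
                "You write Splunk SPL. Return exactly one SPL query and nothing else. "
                "Assume authentication events live in index=wineventlog."  # hypothetical index
            )},
            {"role": "user", "content": request},
        ],
    )
    return response.choices[0].message.content.strip()

print(nl_to_spl(
    "Accounts with more than 5 failed logons followed by a successful logon "
    "from the same source IP within 10 minutes, over the last 24 hours."
))
```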

ChatGPT’s potential to improve cybersecurity is just beginning

Expect to see more ChatGPT-based cybersecurity platforms launched in the second half of 2023, including one from Palo Alto Networks, whose CEO Nikesh Arora hinted on the company’s latest earnings call that the company sees “significant opportunity as we begin to embed generative AI into our products and workflows.” Arora added that the company intends to deploy a proprietary Palo Alto Networks security LLM in the coming year.

The second half of 2023 will see an exponential increase in new product launches aimed at streamlining SOCs and closing the identity-endpoint gap attackers continue exploiting.   

What’s most interesting about this area is how the new insights from telemetry data analyzed by generative AI platforms will provide innovative new product and service ideas. Endpoints and the data they analyze are turbocharging innovation. Undoubtedly, the same will be true for generative AI platforms that rely on ChatGPT to make their insights available easily and quickly to security professionals. 

