AI in Cybersecurity: Opportunities and Challenges for CISOs
Gutsy Staff | April 18, 2024
Enhanced threat detection.
Automated response mechanisms.
Streamlined security operations.
These are just a few of the promises Artificial Intelligence (AI) may fulfill for security leaders as the use of the technology spreads.
Gutsy CTO John Morello recently sat down with cybersecurity expert Bruce Schneier to discuss the potential solutions and challenges AI may introduce for security leaders.
Opportunities AI may provide to CISOs:
- Enhanced Detection Capabilities: AI can analyze vast amounts of data in real time, enabling early detection of potential security threats (see the sketch after this list).
- Automated Response: AI can respond autonomously to cyber threats, mitigating risks and minimizing response times.
- Improved Risk Assessment: AI algorithms can assess the likelihood and potential impact of various security incidents, aiding proactive risk management.
- Streamlined Security Operations: By automating routine work, AI frees security teams to focus on strategic planning and high-level decision-making.
- Adaptive Defense Mechanisms: AI can adapt to evolving threats by continuously learning from past incidents and adjusting defense strategies accordingly.
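To make the detection bullet concrete, here is a minimal sketch of what AI-assisted anomaly detection can look like in practice. This is our own illustration, not a Gutsy product example: it assumes scikit-learn is available and that security telemetry has already been reduced to numeric per-host features; the feature choices and values below are invented for demonstration.

```python
# Minimal sketch of AI-assisted anomaly detection over host telemetry.
# Features per host (all illustrative): bytes sent, failed logins,
# distinct destination ports.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=7)

# Simulated "normal" telemetry: 500 hosts x 3 features.
normal = rng.normal(loc=[50_000, 2, 10], scale=[10_000, 1, 3], size=(500, 3))

# A couple of hosts behaving oddly: huge outbound volume, many failed logins.
suspicious = np.array([[900_000.0, 40.0, 60.0],
                       [750_000.0, 25.0, 45.0]])

# Train on normal behavior; the model learns what "usual" looks like.
model = IsolationForest(contamination=0.01, random_state=7)
model.fit(normal)

# predict() returns -1 for anomalies and 1 for inliers.
for host, label in zip(suspicious, model.predict(suspicious)):
    print(host, "anomalous" if label == -1 else "normal")
```

In a real deployment, the model would train on historical telemetry and feed its flags into the triage workflow, which is exactly the kind of task automation John and Bruce discuss in the interview below.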
Challenges AI may present for CISOs:
- Bias and Discrimination: Models may inherit biases from training data, leading to discriminatory outcomes and exacerbating existing social inequalities.
- Corporate Control and Privacy Concerns: Corporate ownership raises concerns about data privacy, surveillance, and potential misuse of AI-powered technologies.
- Monopolization and Lack of Diversity: The dominance of a few large corporations may stifle innovation and diversity outside of those organizations.
- Empowerment of Malicious Actors: Both defenders and attackers will have their capabilities amplified, potentially increasing the scale and sophistication of cyber attacks.
- Ethical and Regulatory Challenges: The technology’s rapid advancement outpaces the development of ethical guidelines, regulatory frameworks, and compliance requirements.
Related Resources:
1) [Whitepaper] Navigating a New Security Governance Reality: A CISO's Guide to Cybersecurity Disclosure & Compliance
2) [Article] When Investing in Security Processes is a Solid Governance Strategy
3) [Article/Video] Redefining Security Governance with Process Mining
Meet Bruce at RSAC 2024
Continue this AI conversation at our "Meet the Author" book signing event at RSAC featuring Bruce Schneier.
He will be at the Gutsy booth (360 in the South Expo Hall) on Tuesday, May 7 from 3:30 - 4:30 pm PST to sign free copies of his best-seller, A Hacker's Mind.
Supplies are limited, so RSVP is highly recommended.
WHAT: Author book signing at RSAC featuring Bruce Schneier
WHEN: Tuesday, May 7, 2024
3:30 - 4:30 pm PST
WHERE: Gutsy's Booth 360
Moscone Center South Expo Hall (San Francisco, CA)
Full interview transcript:
John
In the most recent Crypto-Gram, as an example, you talked about how AI enables mass surveillance in a more effective manner than had ever been possible before.
Bruce
So on risks of AI, I think a lot about near-term risks today. I think about the misogyny, the racism, the hate that comes out of the training data that the AI mirrors. We are not good at all at recognizing that and dealing with it in any way. I worry a lot about corporate control of AI, given that we expect these AIs to be our assistants.
Our relationship with them is going to be very intimate, in a way that's even more so than with our search engine or our browser or even our phone. And the fact is that they will be double agents, that they will be owned by and run by for-profit corporations.
It's doing things for you, but it's really working against you. It's really spying on you. That's its goal. I worry about that. I worry about monopolization, that there's not going to be enough competition in these AI models. I worry that they're all being designed by corporations. I think corporate AI is a certain type of AI, and that if we had a public AI, designed not under the profit motive, you'd get a different flavor of AI, which will have benefits.
And lastly, I worry about the empowerment, that AI will empower people, and that's mostly good. But it empowers bad people as well, and I don't know if we're ready for that kind of empowerment. And I guess if I add one more: AI is going to change a lot of the dynamics of what is a signal. So let me give you sort of one example.
If I wanted to sue you, I would hire an attorney and we would file a lawsuit. I'm going to make this up: that's going to cost $50,000 to do. And when you receive that lawsuit, there's a strong social signal, like, I am so mad at you that I've spent $50,000 just to get this started, and I'm going to see this through. What if it cost me $5,000, or $500, to push a button and the AI writes the complaint and files it? That social signal changes, and we're probably getting ten or a hundred times more lawsuits in the world. So that's going to change dramatically the dynamics of this arbitration process, this adjudication process, in ways we don't understand. I think there are big risks there. It's very similar to the risk of misinformation.
Right? AI produces articles, it produces things; it changes the cost of doing the thing. But we've built society around these old human costs, not these new machine-assisted costs.
John
You know, it's something that could perform hacking or espionage at a really unprecedented kind of scale, for a lot of those same reasons that you just identified.
Bruce
I mean, AI is always about machines doing what used to be the purview of humans. You think about AI hacking, it's AIs being hackers, right? I mean, what do hackers do? They go into a computer, they type a bunch of things. They'll do reconnaissance, they'll try to break into your network, they'll move around. You know, over the years, computers will be taking over some of those pieces.
It's not going to be push a button and you hack a network; it's going to be some human-computer collaboration. But as computers take on more of those tasks, fewer and less-skilled humans might be necessary. It might be that it'll make skilled humans more powerful by a lot and, you know, less skilled humans powerful to some level.
I mean, we really don't know how this works. We've seen examples going both ways in different aspects of AI-human collaboration: whether it makes the low end good, or it doesn't really help the low end and just makes the high end much better. This means we might get more hacking, we'll get cheaper hacking, and now defense has to respond in some way.
And we saw this in 2016: DARPA had an AI capture-the-flag contest, and the finals were at DEF CON. Capture the flag is, you know, the great hacker game where you're in a simulated environment and you defend your piece, you attack other pieces. Well, they did the same thing with computers. This was 2016, pre-generative AI, old-school AI, and it was super interesting.
DARPA never repeated the experiment. China is doing this every year; it's called the Robot Hacking Games. So they are working on these AIs, both attackers and defenders. But I think it's going to be very interesting to see our field in the next bunch of years as these AIs come online on both sides, the attack and the defense.
John
Do you foresee maybe even a future in which the security organization within the enterprise is really people who are more instructing AIs on how to perform that defense, and less about doing, you know, the analysis and the incident response and patching and so forth themselves?
Bruce
You need AI processes; you need to push a button and a patch gets installed on a thousand computers. I mean, you want the computers doing the things that need to be done quickly, and the more they can do the analysis, the more they can do the presenting of options.
I want to see humans doing the strategic thinking that the machines can't. And the tactical stuff, that's just, you know, do we go left or right? Well, it depends on where the attack is coming from. Let the machines do that.
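To ground Bruce's "push a button and a patch gets installed on a thousand computers" point, here is a minimal sketch of that kind of one-button automation. It is our own illustration, not anything from the interview: it assumes passwordless SSH access to Debian-style hosts, and the hostnames and commands are placeholders.

```python
# Minimal sketch of one-button fleet patching over SSH (assumptions:
# passwordless SSH via BatchMode, Debian/Ubuntu hosts, sudo rights).
import subprocess
from concurrent.futures import ThreadPoolExecutor

# Placeholder host names; a real run would pull these from inventory.
HOSTS = [f"host{n:04d}.example.internal" for n in range(1000)]

def patch(host: str) -> tuple[str, bool]:
    """Run an unattended package upgrade on one host; return (host, success)."""
    try:
        result = subprocess.run(
            ["ssh", "-o", "BatchMode=yes", host,
             "sudo apt-get update && "
             "sudo DEBIAN_FRONTEND=noninteractive apt-get -y upgrade"],
            capture_output=True, text=True, timeout=600,
        )
        return host, result.returncode == 0
    except subprocess.TimeoutExpired:
        return host, False

# Humans decide *whether* to patch; the machines fan out and do the rest.
with ThreadPoolExecutor(max_workers=50) as pool:
    failures = [h for h, ok in pool.map(patch, HOSTS) if not ok]

print(f"{len(HOSTS) - len(failures)} patched, {len(failures)} failed")
```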