Podcast Detail

SANS Stormcast Monday, February 9th, 2026: Azure Vulnerabilities; AI Vulnerability Discovery; GitLab AI Gateway Vuln

If you are not able to play the podcast using the player below: Use this direct link to the audio file: https://traffic.libsyn.com/securitypodcast/9800.mp3

Azure Vulnerabilities; AI Vulnerability Discovery; GitLab AI Gateway Vuln

Microsoft Patches Four Azure Vulnerabilities (three critical)
https://msrc.microsoft.com/update-guide/vulnerability

Evaluating and mitigating the growing risk of LLM-discovered 0-days
https://red.anthropic.com/2026/zero-days/

GitLab AI Gateway Vulnerability CVE-2026-1868
https://about.gitlab.com/releases/2026/02/06/patch-release-gitlab-ai-gateway-18-8-1-released/

Podcast Transcript

 Hello and welcome to the Monday, February 9th, 2026
 edition of the SANS Internet Storm Center's Stormcast. My
 name is Johannes Ullrich, recording today from
 Jacksonville, Florida. And this episode is brought to you
 by the SANS.edu Undergraduate Certificate Program in Applied
 Cybersecurity. Let's start today with a couple of
 vulnerabilities in Microsoft's Azure Cloud. There were
 actually four vulnerabilities that were being addressed late
 last week and three of them are critical. So I do want to
 point them out here specifically. One is in Azure Front Door: an elevation of privilege vulnerability. An elevation of privilege vulnerability in a cloud solution, of course, always means cross-tenant possibilities here. And yes, Front Door, that's Azure's CDN
 service. The next one we have is in Azure Functions. Azure
 Functions is the serverless feature in Azure Cloud. So
 again, here we have an information disclosure vulnerability. And then a second privilege escalation vulnerability. This
 one is in Azure Arc. And that's sort of the cross-cloud admin platform that Microsoft offers to its customers; I think that's how it's best described. Good news, however: as is typical for these kinds of cloud software issues, you as a customer don't really have to do anything. Microsoft
 already mitigated these vulnerabilities. If you do run
 into any kind of... And one of the heavily debated topics
 these days is the usefulness of AI tools to find
 vulnerabilities. In particular, in the open source space, there were a couple of authors who sort of went on the record, basically complaining about all of the AI slop they're being bombarded with. Claude, or Anthropic really, has now released a paper looking into their latest model, Opus 4.6, and trying to figure out how useful it is in order to find vulnerabilities. Well, they, of course, working for Anthropic, find it to be very
 useful. They did discover 500 different vulnerabilities that
 they describe as high impact and they reported them to the
 respective open source projects. They did build some
 safeguards in there to not just flood these open source
 projects with AI slop. They put some human validation, I assume, into the results. So hopefully these will be
 useful results. And as part of the paper, they're pointing
 out in particular Ghostscript and OpenSC. Ghostscript, the PDF and PostScript interpreter library, of course, has had quite a few vulnerabilities in the past. And OpenSC is used to read and interface with smart cards; also a library that is, well, no stranger to vulnerabilities. And of course, it has to deal with a lot of these difficult parsing issues like ASN.1 and such, which continue to be problematic. So certainly these are some worthwhile projects to look at for more vulnerabilities.
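To make the ASN.1 point a little more concrete, here is a minimal C sketch of the classic DER long-form length-decoding pitfall. This is not from the episode or from Anthropic's paper, and the buffer contents and function names are made up for illustration: the first length byte says how many length bytes follow, and a parser that trusts that count and the decoded value without checking them against the remaining input will accept a six-byte message that claims a multi-gigabyte body.

```c
/* Hypothetical sketch: why ASN.1/DER length decoding keeps producing bugs. */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Naive long-form length decode: trusts the length-of-length byte and never
 * checks it, or the decoded value, against the remaining buffer. A malformed
 * message can make it read past the end of the input or accept a length far
 * larger than the data that is actually there. */
static size_t decode_len_naive(const uint8_t *buf, size_t pos) {
    uint8_t first = buf[pos++];
    if (first < 0x80)                 /* short form: length fits in 7 bits */
        return first;
    size_t nbytes = first & 0x7f;     /* long form: next nbytes hold length */
    size_t len = 0;
    for (size_t i = 0; i < nbytes; i++)
        len = (len << 8) | buf[pos + i];   /* no bounds or overflow checks */
    return len;
}

/* Safer variant: verify that the length bytes and the resulting length both
 * fit inside what is actually left in the buffer. Returns 1 on success and
 * writes the decoded length through *out, 0 on rejection. */
static int decode_len_checked(const uint8_t *buf, size_t buflen,
                              size_t pos, size_t *out) {
    if (pos >= buflen) return 0;
    uint8_t first = buf[pos++];
    if (first < 0x80) { *out = first; return 1; }
    size_t nbytes = first & 0x7f;
    if (nbytes == 0 || nbytes > sizeof(size_t) || nbytes > buflen - pos)
        return 0;                     /* reject over-long or truncated lengths */
    size_t len = 0;
    for (size_t i = 0; i < nbytes; i++)
        len = (len << 8) | buf[pos + i];
    if (len > buflen - pos - nbytes)  /* the content must fit in the buffer */
        return 0;
    *out = len;
    return 1;
}

int main(void) {
    /* A six-byte SEQUENCE whose long-form length claims a ~4 GiB body. */
    const uint8_t msg[] = { 0x30, 0x84, 0xff, 0xff, 0xff, 0xff };
    size_t len = 0;
    printf("naive decode accepts the claimed length: %zu\n",
           decode_len_naive(msg, 1));
    printf("checked decode rejects it (1 = ok, 0 = reject): %d\n",
           decode_len_checked(msg, sizeof msg, 1, &len));
    return 0;
}
```

The checked variant simply refuses lengths that cannot fit in what remains of the buffer, the kind of bounds discipline that historically tends to go missing in hand-written ASN.1 parsers.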
 But they also point out here that both of these projects had had a
 number of security reviews in the past and like I said,
 they're very high profile and known to have contained
 vulnerabilities in the past. But still, all the prior
 efforts, particularly fuzzing, did overlook a lot of issues
 that their AI model was able to identify. I think the big
 lesson learned here is that in order to get good results from
 these AI models, you also need somewhat skilled operators
 actually using them. A lot of the AI slop that the open
 source projects are complaining about is, I think, not so much the AI tools themselves, but just sort of unskilled and indiscriminate usage of these AI tools, which really gave them a bit of a bad rep. But of course, at this point, AI is also causing some vulnerabilities, or rather, the adoption of AI tooling is. The
 latest example is a vulnerability that GitLab
 addressed last week. It affects the GitLab AI Gateway.
 This could lead to code execution, but only for
 authenticated users. So if you are doing an on-premise
 install of GitLab's AI Gateway, make sure that you're
 up to date. Well, and this is it for today. So thanks again
 for listening. Thanks for liking. Thanks for subscribing
 to this podcast. Remember, I have classes coming up end of
 March, early April in Orlando, and then end of April in
 Amsterdam. And just check the Stormcast page on our website, and at the bottom, below the show notes, you should see
 links to upcoming classes. Thanks and talk to you again
 tomorrow. Bye.