GitLab’s new security feature uses AI to explain vulnerabilities to developers

Developer platform GitLab today announced a new AI-powered security feature that uses a large language model to explain potential vulnerabilities to developers, with plans to expand it to automatically resolve those vulnerabilities using AI in the future.

Earlier this month, the company announced an experimental tool that explains code to developers, which works similarly to the new security feature, as well as an experimental feature that automatically summarizes issue comments. It’s also worth noting that GitLab already released a code completion tool, now available to GitLab Ultimate and Premium users, and launched its ML-based suggested reviewers feature last year.

The new “explain this vulnerability” feature will attempt to help teams find the best way to fix a vulnerability within the context of the codebase. It is this context that makes the difference here, as the tool can combine basic information about the vulnerability with information specific to user code. This should make it easier and faster to remedy these issues.
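
To make the “context” point concrete, here is a minimal, hypothetical sketch of how a feature like this might merge generic scanner findings with the user’s own code before prompting a large language model. This is our illustration, not GitLab’s actual implementation; the function name, the finding fields and the example values are all assumptions.

    # Hypothetical sketch, not GitLab's actual implementation: how a tool like
    # "explain this vulnerability" might merge generic scanner findings with
    # the user's own code before sending a prompt to a large language model.

    def build_explanation_prompt(finding: dict, code_snippet: str) -> str:
        """Combine basic vulnerability info with the affected source code so
        the model can explain the issue in the context of this codebase."""
        return (
            f"Vulnerability: {finding['title']} ({finding['identifier']})\n"
            f"Severity: {finding['severity']}\n"
            f"Scanner description: {finding['description']}\n\n"
            f"Affected code:\n{code_snippet}\n\n"
            "Explain why this code is vulnerable and suggest a fix that fits "
            "the conventions of the snippet above."
        )

    # Illustrative SAST finding and code snippet (made-up values).
    finding = {
        "title": "SQL injection",
        "identifier": "CWE-89",
        "severity": "High",
        "description": "User input is interpolated directly into a SQL query.",
    }
    snippet = "cursor.execute(f\"SELECT * FROM users WHERE name = '{name}'\")"

    # Per DeSanto's comments below, the prompt would only go to a model that
    # is GitLab's own or hosted inside GitLab's cloud architecture.
    print(build_explanation_prompt(finding, snippet))

The point of the sketch is the final instruction in the prompt: by showing the model the real code alongside the generic CWE description, the explanation and suggested fix can match the project’s own conventions rather than a textbook example.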

The company calls its overall philosophy behind adding AI features “velocity with guardrails”: combining AI-based code and test generation with the backing of the company’s full DevSecOps platform to ensure that everything the AI builds can be deployed safely.

GitLab also emphasized that all of its AI features are designed with privacy in mind. “If we are touching your intellectual property, which is your code, we are only going to be sending that to a model that is GitLab’s or is inside the GitLab cloud architecture,” GitLab CPO David DeSanto told me. “The reason that’s important to us, and this goes back to being an enterprise DevSecOps company, is that our customers are highly regulated. Our customers are generally very security and compliance aware, and we knew we could not build a code suggestions solution that required us to send their code to a third-party AI.” He also noted that GitLab will not use its customers’ private data to train its models.

DeSanto emphasized that GitLab’s overall goal for its AI initiative is a 10x gain in efficiency, not just for individual developers but across the overall development lifecycle. As he rightly pointed out, even if you could increase a developer’s productivity 100-fold, inefficiencies further downstream, in reviewing that code and putting it into production, could easily negate that gain.

“If development is 20% of the lifecycle, even if we make that 50% more efficient, you’re not actually going to feel it,” DeSanto said. “Now, if we also make the security teams, the operations teams and the compliance teams more efficient, then as an organization, you’re going to see it.”
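
His arithmetic is easy to verify with Amdahl’s law, which caps the overall gain from speeding up one stage of a pipeline at that stage’s share of the whole. The sketch below is our illustration of the numbers in his example, not anything GitLab published.

    # A back-of-the-envelope check of DeSanto's point using Amdahl's law:
    # speeding up one stage of a pipeline only helps in proportion to that
    # stage's share of the whole.

    def overall_time(dev_share: float, dev_speedup: float) -> float:
        """Fraction of the original lifecycle time remaining after making
        the development stage `dev_speedup` times faster."""
        return (1 - dev_share) + dev_share / dev_speedup

    # Development is 20% of the lifecycle; "50% more efficient" = 1.5x faster.
    remaining = overall_time(dev_share=0.20, dev_speedup=1.5)
    print(f"{(1 - remaining):.1%} of total lifecycle time saved")  # ~6.7%

A 1.5x speedup of development alone trims less than 7% off the total lifecycle, which is why GitLab is extending these tools to security, operations and compliance teams as well.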

The “explain this code” feature, for example, has proven to be quite useful not only for developers but also for QA and security teams, who now have a better understanding of what to test. That, surely, is also why GitLab extended it to explain vulnerabilities. In the long term, the idea is to build features that help these teams automatically generate unit tests and security patches, which would then be integrated into the overall GitLab platform.

According to GitLab’s recent DevSecOps report, 65% of developers already use AI and ML in their testing efforts or plan to do so in the next three years, and 36% of teams already use an AI/ML tool to check their code before code reviewers see it.

“Given the resource constraints DevSecOps teams face, automation and AI become a strategic resource,” writes GitLab’s Dave Steer in today’s announcement. “Our DevSecOps platform helps teams fill critical gaps while automatically enforcing policies, enforcing compliance frameworks, performing security testing using GitLab automation capabilities, and providing AI-assisted recommendations, freeing up resources.”
