Securing Open Source Software
The Role of AI in Enhancing Cybersecurity
Highlights
- Open-source software powers critical infrastructure across sectors—from healthcare to telecommunications to transportation—making its security a top priority.
- Security vulnerabilities in open-source projects can have far-reaching consequences. As these projects become integral to critical infrastructure, securing them is paramount.
- Artificial intelligence (AI) offers significant potential to enhance open-source security, but challenges remain in training, monitoring, and regulating AI-driven tools.
The Open Source Security Foundation (OpenSSF) was founded in 2020, and most of the relevant industry players are now contributing members, including Red Hat and Cisco. Although OpenSSF has already done a lot to promote, stimulate, and guide open-source software security, we believe that more can and should be done, particularly with respect to AI-enabled code review. In this article, we highlight the challenges and opportunities that come with this novel technology.
The Problem
Open-source software is embedded in systems that power daily life - from cloud services to autonomous vehicles. While its transparency fosters innovation, the widespread use of open-source software means that vulnerabilities affect many industries, often spreading through complex software supply chains.
As open-source software becomes essential to modern infrastructure, its security becomes a societal issue. Securing these systems in this era of rapid AI innovation isn’t just a concern for developers; it is a collective responsibility to ensure we fully benefit from AI innovation.
How do we harness open-source AI innovation while balancing it against our goals for security and safety?
Challenges and Complications
Data Quality: AI offers a promising solution for securing open-source software by automating vulnerability detection. However, AI can only be effective if it’s trained on high-quality, secure data. If trained on flawed or incomplete code, AI could miss vulnerabilities or even introduce new security risks.
False Positives and Negatives: A significant risk of this new technology is that it can generate large numbers of false positives (where AI flags secure code as insecure) and false negatives (where AI misses vulnerabilities). Given the complexity of cybersecurity threats, it will take years of intense research and development before code review by AI can be fully relied upon.
Regulation and Standards: Open-source software is used in sectors like healthcare and finance, which require strict security standards. Without proper oversight, AI-driven security tools could miss crucial vulnerabilities, leaving critical systems exposed. A regulatory framework is necessary to ensure open-source software meets security standards, including when it is reviewed by AI.
Case Study: CodeGate
CodeGate secures AI-generated code, preventing common vulnerabilities introduced by tools like GitHub Copilot. AI coding assistants help developers write code but can inadvertently introduce security flaws by recommending outdated libraries or exposing sensitive data. CodeGate acts as a security layer, ensuring that AI-generated code adheres to secure coding practices and preventing the use of components with known vulnerabilities or deprecated libraries.
Although, strictly speaking, CodeGate does not use AI to secure AI-generated code, it seems only a small step to improve the tool's effectiveness with AI.
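To make the case study concrete, the sketch below shows, in highly simplified form, the kind of policy check such a gateway could apply to dependencies suggested by an AI assistant. It is an illustration only, not CodeGate's actual implementation; the deny-list entries and package names are hypothetical.

```python
# Illustrative only: a simplified dependency gate, not CodeGate's actual logic.
# The deny-list entries below are hypothetical examples.
DEPRECATED_OR_VULNERABLE = {
    ("pycrypto", None),     # hypothetical: deprecated package, any version
    ("requests", "2.5.0"),  # hypothetical: a known-vulnerable version
}

def review_suggestion(package: str, version: str) -> bool:
    """Return True if an AI-suggested dependency passes the policy check."""
    for name, bad_version in DEPRECATED_OR_VULNERABLE:
        if package == name and bad_version in (None, version):
            return False
    return True

for pkg, ver in [("requests", "2.5.0"), ("numpy", "1.26.4")]:
    verdict = "allowed" if review_suggestion(pkg, ver) else "blocked"
    print(f"{pkg}=={ver}: {verdict}")
```

A real gateway would sit between the assistant and the developer, applying checks like this to every suggestion before it reaches the codebase.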
Options for Securing Open-Source Software
Practical applications of AI in securing open-source projects will come with their own security risks. The following steps can enhance overall security, both in general and when applying AI-enabled code review specifically.
AI-Driven Code Auditing: AI can automate the code review process, but its effectiveness depends on being trained on verified, secure data to prevent introducing new vulnerabilities. This includes verified secure code as well as verified insecure code.
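One way to assemble such training data, often used in research on vulnerability detection, is to pair code that contained a verified vulnerability with the patched version that fixed it. The Python sketch below illustrates that labelling step; the code snippets and the Sample structure are placeholder examples, not a real dataset or any specific tool's API.

```python
# Minimal sketch: labelling verified insecure/secure code pairs, e.g. the
# before/after versions of a published security fix. Placeholder data only.
from dataclasses import dataclass

@dataclass
class Sample:
    code: str
    label: int  # 1 = verified insecure, 0 = verified secure

def build_dataset(fix_pairs):
    """Each pair is (code_before_fix, code_after_fix)."""
    dataset = []
    for before, after in fix_pairs:
        dataset.append(Sample(code=before, label=1))
        dataset.append(Sample(code=after, label=0))
    return dataset

pairs = [(
    'query = "SELECT * FROM users WHERE id=" + user_id',               # string-built SQL
    'cursor.execute("SELECT * FROM users WHERE id = %s", (user_id,))'  # parameterised query
)]
for sample in build_dataset(pairs):
    print(sample.label, sample.code)
```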
Human Oversight: A "human in the loop" approach ensures that AI findings are reviewed by cybersecurity experts. Humans can spot complex vulnerabilities AI might miss and ensure the accuracy of AI-driven assessments before code is deployed.
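As a rough illustration of such a gate, the Python sketch below refuses to treat a release as deployable until every AI finding carries an explicit human verdict. The Finding structure, field names, and example finding are hypothetical.

```python
# Rough illustration of a "human in the loop" gate: no AI finding is acted on
# until a named reviewer records a verdict. Structure and fields are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Finding:
    rule: str                      # what the AI flagged, e.g. "hardcoded-credential"
    location: str                  # file and line the finding points at
    reviewer: Optional[str] = None
    verdict: Optional[str] = None  # "confirmed" or "false-positive"

def ready_to_deploy(findings):
    """Deploy only when every finding has a human verdict and none is confirmed."""
    return (all(f.verdict is not None for f in findings)
            and not any(f.verdict == "confirmed" for f in findings))

findings = [Finding(rule="hardcoded-credential", location="app/config.py:12")]
print(ready_to_deploy(findings))  # False: still awaiting human review
findings[0].reviewer, findings[0].verdict = "analyst@example.org", "false-positive"
print(ready_to_deploy(findings))  # True: reviewed and dismissed by an expert
```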
Communication of Changes and Vulnerabilities: Open-source communities must quickly communicate changes and vulnerabilities. Adopting the Software Bill of Materials (SBOM) helps track components and vulnerabilities, ensuring timely updates and coordinated security actions. With an SBOM, organisations can see all the components in their open-source software, making it easier to respond to new threats. In addition to managing vulnerabilities, SBOMs provide valuable insights into outdated technology, supply chain security, and the integration of IT with operational technology (OT).
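To illustrate how an SBOM supports this, the sketch below reads a minimal CycloneDX-style SBOM and flags any listed component that appears on a list of known-vulnerable versions. The component versions and the vulnerability list are made up for the example; a real check would query advisory databases.

```python
# Minimal sketch: flagging components in a CycloneDX-style SBOM against a
# made-up list of known-vulnerable versions.
import json

SBOM_JSON = """
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "components": [
    {"type": "library", "name": "openssl", "version": "1.1.1"},
    {"type": "library", "name": "zlib", "version": "1.3.1"}
  ]
}
"""

KNOWN_VULNERABLE = {("openssl", "1.1.1")}  # hypothetical advisory data

def flag_vulnerable_components(sbom_text):
    sbom = json.loads(sbom_text)
    return [
        f'{c["name"]}=={c["version"]}'
        for c in sbom.get("components", [])
        if (c["name"], c["version"]) in KNOWN_VULNERABLE
    ]

print(flag_vulnerable_components(SBOM_JSON))  # ['openssl==1.1.1']
```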
Call for Action
As the role of open-source software grows, so must the efforts to secure it. To protect the systems we rely on every day, we must:
- Train AI on Verified Code
- Ensure Human Oversight
- Promote SBOM Adoption
This requires additional research, standardisation, and policy-making efforts, and we urge industry, government, and academia to come together and make targeted contributions to the relevant bodies.
Conclusion
The future of open-source software - and the security of the technologies we depend on daily - hinges on our ability to secure it. This requires a unified approach, combining AI’s power, human expertise, regulatory oversight, and initiatives like SBOM to safeguard our digital infrastructure.

About the authors
Prof. Frank den Hartog: Cisco Research Chair - Critical Infrastructure, University of Canberra. A long track record in industry as well as academia with a focus on the cyber security of critical infrastructures such as telecommunication networks and Industry IoT.
A/Prof. Carlos Kuhn: A distinguished physicist with a background in both theoretical and experimental physics. Currently, he is the Research Chair in Open-Source Technology, focusing on computational sciences, AI, and quantum computation.
Eric Nguyen: Industry Advisor (Cyber x AI) at the Open-Source Institute, Australia. With a background in management consulting, he specialises in operationalising emerging technologies to enhance processes and engage people within large organisations.