Transparency and verifiability are core tenets of open-source technology, promoting trust and collaboration. Yet the open-source model has come under greater scrutiny as its impact on security and intellectual property (IP) protection comes into sharper focus. From decentralized finance (DeFi) protocols to artificial intelligence (AI) models, the debate over open versus closed source is intensifying, with experts weighing the benefits of innovation against the risks of exploitation.

Open Source in DeFi Fuels Growth and Security

Open-source protocols have established a strong foundation for the competitive DeFi ecosystem we see today, fostering rapid innovation and expansion. As of April 29, open-source protocols accounted for almost 90% of the value locked in Solana’s DeFi ecosystem, a figure that illustrates just how much the ecosystem depends on code that is transparent and verifiable.

Max Kaplan, founder of Sol Strategies, puts open-source practices into action. He describes how audits and bug bounty programs increase security by focusing additional scrutiny on the code and rewarding responsible disclosure. This process provides an ongoing opportunity to detect and fix weaknesses that would otherwise be missed.

Nonetheless, the exploit of Solana’s Loopscale protocol shows that open source is no fail-safe against bad actors. The incident underscores the need for comprehensive, robust security measures and for proactive risk management, regardless of how open the code is.

AI's Open-Source Dilemma

The rise of open-source AI models, exemplified by China’s DeepSeek, has sparked a global debate about the balance between democratization and potential misuse. Released in early 2025, DeepSeek’s powerful and low-cost model shook the market, showcasing the potential of open-source AI to challenge established players.

Matt Pearl, director of the strategic tech program at the Center for Strategic and International Studies, cautions that open-source AI can be dangerous without adequate safety guardrails. Pearl and his co-authors argue that unrestricted access to these models allows anyone to download and modify them and strip out their safeguards, potentially enabling malicious applications.

DeepSeek’s vulnerability to jailbreaking, for example, makes it easier to produce malware, phishing kits, or disinformation, underscoring the grave hazards of unregulated open-source AI. While openly available models have clearly positive applications, they also present a real risk of abuse, especially by malicious actors, and careful development and deployment practices are needed to mitigate that risk.

Intellectual Property and Licensing Models

The open-source versus closed-source debate extends well beyond security to the protection of intellectual property. Projects, the argument goes, can better protect their IP by choosing the right license and establishing proper governance procedures.

Uniswap v3’s Business Source License exemplifies how projects can balance open-source principles with the need to protect their unique innovations. The code remains public, encouraging a collective, iterative design process, while the license restricts unauthorized commercial use so the original creators retain the economic benefit of what they build.

Concerns over IP also complicate discussions about the motives for adopting open-source versus closed-source models. One common argument for closing off smart contract code is that regular users do not read it, while malicious actors do. Jordan, however, describes closed-source DeFi protocols and wallets in particular as one of the network’s largest vulnerabilities.