A step forward that could influence how smart contracts are designed and verified.
Updated
January 8, 2026 6:32 PM

ChainGPT's robot mascot. IMAGE: CHAINGPT
A new collaboration between ChainGPT, an AI company specialising in blockchain development tools, and Secret Network, a privacy-focused blockchain platform, is redefining how developers can safely build smart contracts with artificial intelligence. Together, they’ve achieved a major industry first: an AI model trained exclusively to write and audit Solidity code is now running inside a Trusted Execution Environment (TEE). For the blockchain ecosystem, this marks a turning point in how AI, privacy and on-chain development can work together.
For years, smart-contract developers have faced a trade-off. AI assistants could speed up coding and security reviews, but only if developers uploaded their most sensitive source code to external servers. That meant exposing intellectual property, confidential logic and even potential vulnerabilities. In an industry where trust is everything, this risk held many teams back from using AI at all.
ChainGPT’s Solidity-LLM aims to solve that problem. It is a specialised large language model trained on over 650,000 curated Solidity contracts, giving it a deep understanding of how real smart contracts are structured, optimised and secured. And now, by running inside SecretVM, the Confidential Virtual Machine that powers Secret Network’s encrypted compute layer, the model can assist developers without ever revealing their code to outside parties.
“Confidential computing is no longer an abstract concept,” said Luke Bowman, COO of the Secret Network Foundation. “We've shown that you can run a complex AI model, purpose-built for Solidity, inside a fully encrypted environment, and that every inference can be verified on-chain. This is a real milestone for both privacy and decentralised infrastructure.”
SecretVM makes this workflow possible by using hardware-backed encryption to protect all data while computations take place. Developers don’t interact with the underlying hardware or cryptography. Instead, they simply work inside a private, sealed environment where their code stays invisible to everyone except them—even node operators. For the first time, developers can generate, test and analyse smart contracts with AI while keeping every detail confidential.
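Conceptually, the developer-side flow follows the standard TEE pattern: verify the enclave's attestation first, and only then send anything sensitive. The sketch below illustrates that pattern in Python; the endpoint, attestation fields and the `audit_contract` helper are hypothetical stand-ins, not the actual SecretVM or Solidity-LLM interface.

```python
# Illustrative sketch only: the endpoint, routes and attestation fields below are
# hypothetical stand-ins for the general TEE workflow, not the real SecretVM API.
import json
import urllib.request

ENCLAVE_URL = "https://example-confidential-endpoint.invalid"  # placeholder
EXPECTED_MEASUREMENT = "abc123..."  # hash of the approved enclave image (assumed known)

def verify_attestation(report: dict) -> bool:
    """Check that the remote enclave reports the code measurement we expect.

    A real deployment would also verify the hardware vendor's signature chain;
    this sketch only compares the measurement field."""
    return report.get("measurement") == EXPECTED_MEASUREMENT

def audit_contract(solidity_source: str) -> dict:
    # 1. Fetch the enclave's attestation report (hypothetical route).
    with urllib.request.urlopen(f"{ENCLAVE_URL}/attestation") as resp:
        report = json.load(resp)
    if not verify_attestation(report):
        raise RuntimeError("Attestation mismatch: refusing to send source code")

    # 2. Only after attestation succeeds is the contract sent. In a TEE setup the
    #    encrypted channel terminates inside the enclave, so node operators and
    #    other intermediaries never see the plaintext source.
    payload = json.dumps({"task": "audit", "source": solidity_source}).encode()
    req = urllib.request.Request(
        f"{ENCLAVE_URL}/infer",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

The important design point is the ordering: the client refuses to transmit source code until the enclave has proven it is running the expected, unmodified image, which is what makes the "sealed environment" claim checkable rather than a matter of trust.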
This shift opens new possibilities for the broader blockchain community. Developers gain a private coding partner that can streamline contract logic or catch vulnerabilities without risking leaks. Auditors can rely on AI-assisted analysis while keeping sensitive audit material protected. Enterprises working in finance, healthcare or governance finally have a path to adopt AI-driven blockchain automation without raising compliance concerns. Even decentralised organisations can run smart-contract agents that make decisions privately, without exposing internal logic on a public chain.
The system also supports secure model training and fine-tuning on encrypted datasets. This enables collaborative AI development without forcing anyone to share raw data—a meaningful step toward decentralised and privacy-preserving AI at scale.
By combining specialised AI with confidential computing, ChainGPT and Secret Network are shifting the trust model of on-chain development. Instead of relying on centralised cloud AI services, developers now have a verifiable, encrypted environment where they keep full control of their code, their data and their workflow. It’s a practical solution to one of blockchain’s biggest challenges: using powerful AI tools without sacrificing privacy.
As the technology evolves, the roadmap includes confidential model fine-tuning, multi-agent AI systems and cross-chain use cases. But the core advancement is already clear: developers now have a way to use AI for smart contract development that is fast, private and verifiable—without compromising the security standards that decentralised systems rely on.
Keep Reading
How Korea is trying to take control of its AI future.
Updated
January 13, 2026 10:56 AM

SK Telecom Headquarters in Seoul, South Korea. PHOTO: ADOBE STOCK
SK Telecom, South Korea’s largest mobile operator, has unveiled A.X K1, a hyperscale artificial intelligence model with 519 billion parameters. The model sits at the center of a government-backed effort to build advanced AI systems and domestic AI infrastructure within Korea. This comes at a time when companies in the United States and China largely dominate the development of the most powerful large language models.
Rather than framing A.X K1 as just another large language model, SK Telecom is positioning it as part of a broader push to build sovereign AI capacity from the ground up. The model is being developed as part of the Korean government’s Sovereign AI Foundation Model project, which aims to ensure that core AI systems are built, trained and operated within the country. In simple terms, the initiative focuses on reducing reliance on foreign AI platforms and cloud-based AI infrastructure, while giving Korea more control over how artificial intelligence is developed and deployed at scale.
One of the gaps this approach is trying to address is how AI knowledge flows across a national ecosystem. Today, the most powerful AI foundation models are often closed, expensive and concentrated within a small number of global technology companies. A.X K1 is designed to function as a “teacher model,” meaning it can transfer its capabilities to smaller, more specialized AI systems. This allows developers, enterprises and public institutions to build tailored AI tools without starting from scratch or depending entirely on overseas AI providers.
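In machine-learning terms, a "teacher model" typically transfers capability through knowledge distillation: a smaller student model is trained to match the teacher's output distribution rather than only the ground-truth labels. The minimal PyTorch sketch below shows that idea; the loss weighting and training-loop shape are illustrative assumptions, not details of SK Telecom's actual pipeline.

```python
# Minimal knowledge-distillation sketch (illustrative only; not SK Telecom's pipeline).
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature: float = 2.0, alpha: float = 0.5):
    """Blend standard cross-entropy with a KL term that pulls the student's
    softened distribution toward the teacher's."""
    # Soft targets from the large teacher model.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    kd = F.kl_div(soft_student, soft_teacher, reduction="batchmean") * temperature ** 2

    # Hard-label loss on the task data.
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1 - alpha) * ce

def train_step(student, teacher, optimizer, batch):
    """One update: the teacher runs in inference mode, only the student learns."""
    inputs, labels = batch
    with torch.no_grad():
        teacher_logits = teacher(inputs)
    student_logits = student(inputs)
    loss = distillation_loss(student_logits, teacher_logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The appeal for a national ecosystem is that the expensive part, the teacher, is trained once, while many small, task-specific students can be produced cheaply from it.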
That distinction matters because most real-world applications of artificial intelligence do not require massive models operating independently. They require focused, reliable AI systems designed for specific use cases such as customer service, enterprise search, manufacturing automation or mobility. By anchoring those systems to a large, domestically developed foundation model, SK Telecom and its partners are aiming to create a more resilient and self-sustaining AI ecosystem.
The effort also reflects a shift in how AI is being positioned for everyday use. SK Telecom plans to connect A.X K1 to services that already reach millions of users, including its AI assistant platform A., which operates across phone calls, messaging, web services and mobile applications. The broader goal is to make advanced AI feel less like a distant research asset and more like an embedded digital infrastructure that supports daily interactions.
This approach extends beyond consumer-facing services. Members of the SKT consortium are testing how the hyperscale AI model can support industrial and enterprise applications, including manufacturing systems, game development, robotics and autonomous technologies. The underlying logic is that national competitiveness in artificial intelligence now depends not only on model performance, but on whether those models can be deployed, adapted and validated in real-world environments.
There is also a hardware dimension to the project. Operating an AI model at the 500-billion-parameter scale places heavy demands on computing infrastructure, particularly memory performance and communication between processors. A.X K1 is being used to test and validate Korea’s semiconductor and AI chip capabilities under real workloads, linking large-scale AI software development directly to domestic semiconductor innovation.
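A rough back-of-envelope calculation makes the scale concrete. The precision levels below are generic assumptions, not published deployment figures for A.X K1:

```python
# Rough weight-memory estimate for a 519-billion-parameter model at common precisions.
# Illustrative arithmetic only; actual A.X K1 deployment details are not stated here.
PARAMS = 519e9

for name, bytes_per_param in [("fp32", 4), ("bf16/fp16", 2), ("int8", 1), ("int4", 0.5)]:
    terabytes = PARAMS * bytes_per_param / 1e12
    print(f"{name:>9}: ~{terabytes:.2f} TB just for the weights")

# Even at bf16 the weights alone approach ~1 TB, far beyond a single accelerator's
# memory, which is why memory bandwidth and inter-processor communication dominate.
```

That is the sense in which running the model doubles as a stress test for domestic chips: the workload exposes exactly the memory and interconnect bottlenecks the article describes.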
The initiative brings together technology companies, universities and research institutions, including Krafton, KAIST and Seoul National University. Each contributes specialized expertise ranging from data validation and multimodal AI research to system scalability. More than 20 institutions have already expressed interest in testing and deploying the model, reinforcing the idea that A.X K1 is being treated as shared national AI infrastructure rather than a closed commercial product.
Looking ahead, SK Telecom plans to release A.X K1 as open-source AI software, alongside APIs and portions of the training data. If fully implemented, the move could lower barriers for developers, startups and researchers across Korea’s AI ecosystem, enabling them to build on top of a large-scale foundation model without incurring the cost and complexity of developing one independently.