Artificial Intelligence

From Security Scores to Dollar Risk: Quantara AI Pushes Continuous Cyber Risk Modeling

Quantara AI launches a continuous platform designed to estimate the financial impact of cyber risk as companies move beyond periodic assessments

Updated

February 20, 2026 6:43 PM

A person tightrope walking between two cliffs. PHOTO: UNSPLASH

Cyber risk is increasingly treated as a financial issue. Boards want to know how much a cyber incident could cost the company, how it could affect earnings, and whether current security spending is justified.

Yet many organizations still measure cyber risk through periodic reviews. These assessments are often conducted once or twice a year, supported by consultants and spreadsheet models. By the time the report reaches senior leadership, the company’s systems may have changed and new threats may have emerged. The way risk is measured does not always match how quickly it evolves.

This gap is where Quantara AI is positioning its new platform. Quantara AI, a Boise-based cybersecurity startup, has introduced what it describes as the industry’s first persistent AI-powered cyber risk solution. The system is designed to run continuously rather than rely on occasional assessments.

The company’s core argument is straightforward: not every security weakness carries the same financial consequence. Instead of ranking issues only by technical severity, the platform analyzes active threats, identifies which company systems are exposed, and estimates how much money a successful attack could cost. It uses statistical models, including Value at Risk (VaR), to calculate potential losses. It also estimates how specific security improvements could reduce that projected loss.
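Quantara has not published its models, but the VaR idea the article references can be sketched in miniature: simulate many possible loss years, read off the loss level that outcomes stay below at a chosen confidence, and rerun the simulation with a security improvement applied. All parameters below are illustrative assumptions, not Quantara figures.

```python
import random

random.seed(7)

def simulate_annual_loss(p_incident, loss_mu, loss_sigma, trials=100_000):
    """Monte Carlo draw of one year's cyber loss per trial: an incident
    occurs with probability p_incident, and its cost follows a lognormal
    distribution (a common heavy-tailed choice for loss modeling)."""
    losses = []
    for _ in range(trials):
        if random.random() < p_incident:
            losses.append(random.lognormvariate(loss_mu, loss_sigma))
        else:
            losses.append(0.0)
    return losses

def value_at_risk(losses, confidence=0.95):
    """VaR at the given confidence level: the loss threshold that
    simulated outcomes stay below in `confidence` share of trials."""
    ordered = sorted(losses)
    return ordered[int(confidence * len(ordered)) - 1]

# Hypothetical numbers: a 30% annual incident probability with a median
# incident cost of e^13 (~$440k), versus 10% after a proposed control.
baseline = simulate_annual_loss(p_incident=0.30, loss_mu=13, loss_sigma=1.0)
improved = simulate_annual_loss(p_incident=0.10, loss_mu=13, loss_sigma=1.0)

print(f"95% VaR before control: ${value_at_risk(baseline):,.0f}")
print(f"95% VaR after control:  ${value_at_risk(improved):,.0f}")
```

The gap between the two VaR figures is the kind of number the article describes: a dollar estimate of how much a specific improvement reduces projected loss, which can then be weighed against the improvement's cost.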

The timing aligns with a broader market shift. International Data Corporation (IDC) projects that by 2028, 40% of enterprises will adopt AI-based cyber risk quantification platforms. These tools convert security data into financial estimates that can guide budgeting and investment decisions. The forecast reflects growing pressure on security leaders to present risk in terms that boards and regulators understand.

Traditional compliance and risk management systems often focus on meeting regulatory standards. Vulnerability management programs typically score weaknesses based on technical characteristics. Consultant-led risk studies provide detailed analysis, but they are usually performed at set intervals. In fast-changing threat environments, that model can leave decision-makers working with outdated information.

Quantara’s platform attempts to replace that periodic process with continuous measurement. It brings together threat data, internal system information and financial modeling in one system. The goal is to show, at any given time, which specific weaknesses could lead to the largest financial losses.
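The contrast with severity-only scoring can be made concrete. In the hypothetical sketch below (all names and numbers invented for illustration), a technically severe flaw on a low-value test server ranks below a milder flaw on a system holding far more money, once findings are sorted by expected annual loss instead of severity score.

```python
from dataclasses import dataclass

@dataclass
class Weakness:
    name: str
    cvss: float           # technical severity score, 0-10
    p_exploit: float      # annual probability an active threat exploits it
    exposed_value: float  # dollars at risk in the affected system

    @property
    def expected_loss(self) -> float:
        # Expected annual loss = likelihood x financial exposure
        return self.p_exploit * self.exposed_value

findings = [
    Weakness("Test-server RCE", cvss=9.8, p_exploit=0.40, exposed_value=50_000),
    Weakness("Payroll session flaw", cvss=5.3, p_exploit=0.15, exposed_value=4_000_000),
]

# Severity ranking puts the RCE first; dollar ranking reverses the order.
by_severity = sorted(findings, key=lambda w: w.cvss, reverse=True)
by_dollars = sorted(findings, key=lambda w: w.expected_loss, reverse=True)
print([w.name for w in by_dollars])
```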

Cyber risk quantification as a concept is not new. What is changing is the expectation that these calculations be updated regularly and tied directly to financial decision-making. As cyber incidents carry clearer monetary consequences, companies are looking for ways to measure exposure with greater precision.

The broader question is whether enterprises will shift fully toward continuous, AI-driven risk analysis or continue relying on periodic external assessments. What is clear is that cybersecurity discussions are moving closer to financial reporting — and tools that estimate potential loss in dollar terms are becoming central to that shift.

Keep Reading

Artificial Intelligence

How ChainGPT and Secret Network Bring Private, Verifiable AI Coding On-Chain

A step forward that could influence how smart contracts are designed and verified.

Updated

January 8, 2026 6:32 PM

ChainGPT's robot mascot. IMAGE: CHAINGPT

A new collaboration between ChainGPT, an AI company specialising in blockchain development tools, and Secret Network, a privacy-focused blockchain platform, is redefining how developers can safely build smart contracts with artificial intelligence. Together, they’ve achieved a major industry first: an AI model trained exclusively to write and audit Solidity code is now running inside a Trusted Execution Environment (TEE). For the blockchain ecosystem, this marks a turning point in how AI, privacy and on-chain development can work together.

For years, smart-contract developers have faced a trade-off. AI assistants could speed up coding and security reviews, but only if developers uploaded their most sensitive source code to external servers. That meant exposing intellectual property, confidential logic and even potential vulnerabilities. In an industry where trust is everything, this risk held many teams back from using AI at all.

ChainGPT’s Solidity-LLM aims to solve that problem. It is a specialised large language model trained on over 650,000 curated Solidity contracts, giving it a deep understanding of how real smart contracts are structured, optimised and secured. And now, by running inside SecretVM, the Confidential Virtual Machine that powers Secret Network’s encrypted compute layer, the model can assist developers without ever revealing their code to outside parties.

“Confidential computing is no longer an abstract concept,” said Luke Bowman, COO of the Secret Network Foundation. “We've shown that you can run a complex AI model, purpose-built for Solidity, inside a fully encrypted environment, and that every inference can be verified on-chain. This is a real milestone for both privacy and decentralised infrastructure.”

SecretVM makes this workflow possible by using hardware-backed encryption to protect all data while computations take place. Developers don’t interact with the underlying hardware or cryptography. Instead, they simply work inside a private, sealed environment where their code stays invisible to everyone except them—even node operators. For the first time, developers can generate, test and analyse smart contracts with AI while keeping every detail confidential.
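The announcement does not publish SecretVM’s verification protocol, but the general pattern behind on-chain verifiable inference can be sketched: the enclave binds the model, prompt and output into a single digest, signs it with a hardware-backed key, and only that digest lands on-chain while the plaintext stays sealed. A minimal illustration of the commitment step, with every identifier hypothetical:

```python
import hashlib
import json

def inference_commitment(model_id: str, prompt: str, output: str) -> str:
    """Illustrative only: hash an inference record into one digest.
    A TEE can sign such a digest, letting anyone later verify that this
    exact output came from this model and prompt without the prompt or
    output ever leaving the enclave in plaintext."""
    record = json.dumps(
        {"model": model_id, "prompt": prompt, "output": output},
        sort_keys=True,  # canonical ordering so the digest is stable
    )
    return hashlib.sha256(record.encode()).hexdigest()

digest = inference_commitment(
    "solidity-llm",  # hypothetical model identifier
    "audit: function withdraw() external { ... }",
    "warning: missing reentrancy guard",
)
print(digest)
```

This sketch covers only the commitment; the actual system would also involve remote attestation of the enclave and a signature scheme, details the announcement does not specify.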

This shift opens new possibilities for the broader blockchain community. Developers gain a private coding partner that can streamline contract logic or catch vulnerabilities without risking leaks. Auditors can rely on AI-assisted analysis while keeping sensitive audit material protected. Enterprises working in finance, healthcare or governance finally have a path to adopt AI-driven blockchain automation without raising compliance concerns. Even decentralised organisations can run smart-contract agents that make decisions privately, without exposing internal logic on a public chain.

The system also supports secure model training and fine-tuning on encrypted datasets. This enables collaborative AI development without forcing anyone to share raw data—a meaningful step toward decentralised and privacy-preserving AI at scale.

By combining specialised AI with confidential computing, ChainGPT and Secret Network are shifting the trust model of on-chain development. Instead of relying on centralised cloud AI services, developers now have a verifiable, encrypted environment where they keep full control of their code, their data and their workflow. It’s a practical solution to one of blockchain’s biggest challenges: using powerful AI tools without sacrificing privacy.

As the technology evolves, the roadmap includes confidential model fine-tuning, multi-agent AI systems and cross-chain use cases. But the core advancement is already clear: developers now have a way to use AI for smart contract development that is fast, private and verifiable—without compromising the security standards that decentralised systems rely on.