Looking Back Before Looking Forward
Every January, we make predictions. This year, before issuing our 2026 forecast, we graded our 2025 calls. The verdict: a solid B. Netskope's $908 million IPO at a $7.3 billion valuation validated our IPO-window thesis ahead of schedule. Personal cyber insurance grew to approximately $3 billion in premium volume, though mainstream consumer adoption remains stubbornly low at around 6% penetration despite 75% of consumers experiencing cybercrime. The crypto-cyber Gartner category we anticipated did not materialize on schedule—partial credit at best. Not a bad track record, but we are not here to celebrate mediocre outcomes. We are here to make five high-conviction calls for 2026 and stake our reputations on them.
What follows is DataInx Ventures' forward view on the themes we believe will define cybersecurity and enterprise AI markets over the next twelve months. These are informed by our deal flow, our portfolio company conversations, and our ongoing analysis of the structural forces reshaping the threat and investment landscape.
Prediction 1: Amazon Reframes the Enterprise AI Narrative
For two years, the AI conversation has been a Microsoft-OpenAI story. That framing is about to shift. Amazon has been playing a patient game—quietly deploying its own AI silicon in Trainium and Inferentia chips, scaling the world's largest enterprise cloud footprint, and investing $8 billion in Anthropic, the company that has emerged as the dominant force in AI-assisted software development. The framing that is crystallizing is straightforward: Microsoft and OpenAI own the consumer AI narrative; Amazon and Anthropic are positioned to own enterprise AI.
This maps cleanly to each company's institutional DNA. Microsoft has always competed on surface area—Windows, Office, and now Copilot embedded everywhere. Amazon has always competed on infrastructure leverage: build the primitives, enable others to build the businesses. AWS does not need consumer mindshare. It needs enterprise procurement relationships and developer obsession, both of which it has in abundance.
The reason Amazon has moved slowly is strategic. Pushing its own silicon too aggressively risked destabilizing a lucrative NVIDIA partnership that underpins much of AWS's AI infrastructure revenue. But when the AI narrative starts being written without you, the calculus changes. In 2026, expect Amazon to stop whispering about Bedrock, Anthropic, and the full AWS AI stack and start asserting its position loudly. The market is structurally underpricing Amazon's enterprise AI position. From a DataInx portfolio construction standpoint, this matters: companies building on AWS/Anthropic infrastructure may find a significantly more favorable distribution and partnership environment than they do today.
Prediction 2: The Vibe Coding Security Crisis Arrives
We have been watching the vibe coding phenomenon with deep concern since its emergence in 2024. AI-assisted code generation tools have democratized software creation in ways that are equal parts commercially significant and catastrophic for security. The problem is not that AI writes bad code—though it often does—but that a substantial portion of the people now shipping software through AI tools have never had security in their mental model. Security cannot be an afterthought when it was never a thought to begin with.
The force multiplier is structural. AI models are trained on the aggregate of all code ever written, which includes a vast corpus of student projects, copy-pasted Stack Overflow answers, and production code written with zero security consideration. AI produces statistically average code by construction, and it produces it at volumes previously impossible. MIT research suggests that 90% of code could be AI-generated by the end of 2026. The security implications of that statistic are not yet priced into enterprise risk frameworks.
2026 will bring a wave of basic vulnerabilities in customer-facing applications. Auth bypasses. Price manipulation. Data leakage. The kind of bugs we thought we left behind in 2004—now shipping at industrial scale.
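As an illustration of the class of bug we expect, here is a minimal sketch of a price-manipulation flaw typical of AI-generated checkout code. The catalog, function names, and request shape are hypothetical; the pattern of trusting client-supplied prices is the point.

```python
# Hypothetical sketch of a price-manipulation bug common in
# AI-generated checkout handlers: the server trusts the price
# submitted by the client instead of looking it up server-side.

CATALOG = {"sku-001": 49.99, "sku-002": 199.00}  # server-side source of truth

def checkout_vulnerable(cart_item: dict) -> float:
    # Trusts whatever price the client sent. An attacker can submit
    # {"sku": "sku-002", "price": 0.01, "qty": 1} and pay a penny.
    return cart_item["price"] * cart_item["qty"]

def checkout_fixed(cart_item: dict) -> float:
    # Uses client input only to identify the product and quantity;
    # the price comes from the server-side catalog.
    price = CATALOG[cart_item["sku"]]
    return price * cart_item["qty"]

tampered = {"sku": "sku-002", "price": 0.01, "qty": 1}
print(checkout_vulnerable(tampered))  # 0.01 -- attacker-controlled total
print(checkout_fixed(tampered))       # 199.0 -- tamper-resistant total
```

A human reviewer flags this in seconds; a model reproducing the statistical average of its training data ships it by default.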
The cleanup market is the investment opportunity. Organizations will discover dozens of vibe-coded internal tools and customer-facing applications with no meaningful security review. The companies building automated security review infrastructure specifically designed for AI-generated code—not the legacy SAST tools designed for human-authored codebases—are operating in a market that will be large, urgent, and chronically underserved for the next several years. This is an area where DataInx is actively seeking founding teams.
Prediction 3: The CISA Vacuum Gets Exploited
CISA—the Cybersecurity and Infrastructure Security Agency—has been functionally dismantled. Official reports suggest approximately one-third of the workforce has departed; credible insider accounts suggest the real figure may approach 70%. The security and geopolitical implications of this cannot be overstated, and they apply regardless of political perspective.
CISA was not primarily a regulatory body. It was the connective tissue between federal, state, local, and private-sector cyber defense. When a state utility, a regional hospital, and a telecom operator were simultaneously hit, CISA was often the entity capable of recognizing the incidents as the work of a single coordinated actor and orchestrating the response. When a state IT team encountered anomalous activity, CISA was the trusted clearinghouse. That coordination capability is now largely absent.
State and local governments cannot fill this gap. They lack the resources, the visibility, and the mandate to defend against nation-state adversaries. Meanwhile, China is already pre-positioned: Volt Typhoon has been documented in critical infrastructure networks across multiple sectors; Salt Typhoon has penetrated telecommunications systems at scale. With a hollowed-out CISA, there is less detection capability, less pressure to remediate, and significantly more time for adversaries to deepen access before anyone notices. Russia and Iran, both engaged in active proxy conflicts with the United States, are reading the same public reports we are. From a VC investment perspective, this creates durable demand for private-sector coordination and threat intelligence infrastructure that fills the coordination vacuum. The market has a structural need that the public sector can no longer meet.
Prediction 4: AI Pentesting Hits the Wall
Many 2026 predictions will claim that AI revolutionizes offensive security and replaces the human red team. We are calling this wrong. The 80/20 problem is fundamental, not solvable with more compute. AI tools are genuinely excellent at automating reconnaissance and basic vulnerability scanning—the 80% of pentest work that was already partially commoditized. The creative exploitation that makes a pentest valuable—the hard 20% that involves understanding organizational context, chaining vulnerabilities through business logic, and identifying the specific paths to the crown jewels—remains resolutely out of reach.
Two structural blockers make this ceiling durable. First, the guardrails problem: foundational models are trained to refuse offensive security tasks. The better these models become at being safe, the worse they are at pentesting. You cannot build a competent AI penetration tester on a model that will not help you exploit anything. Second, the context problem: the value of a pentest is not the vulnerability list—it is knowing which finding leads to material risk and which is report padding. AI has no organizational context. It does not know what matters to this enterprise, what the crown jewels are, or how the business logic in that homegrown ERP translates to exploitable attack surface.
The result is predictable: AI pentesting tools will function as expensive vulnerability scanners. The buyers who invested expecting to reduce their red team headcount will be disappointed. We view this as an important signal for portfolio construction: the managed security services market for expert human security practitioners is more defensible than the current discourse suggests.
Prediction 5: Right-Sized AI Models Win
The era of reflexively deploying frontier models for every AI use case is ending. Not because frontier models are insufficient—they are extraordinary—but because the economics of indiscriminate use are becoming untenable as OpenAI and Anthropic reduce subsidization to reflect real infrastructure costs. The infrastructure for right-sizing already exists: GPT-4.1 in nano, mini, and standard tiers; Anthropic's Haiku, Sonnet, and Opus; Azure's Model Router that automatically routes across 18 models optimizing for cost, quality, or balance. What has been missing is discipline.
Using a frontier-scale multimodal reasoning model to fix grammar in an internal document is the enterprise AI equivalent of dispatching a heavy cargo plane to deliver a letter. The waste is obvious once you are paying real per-token prices. The practical implication for the security market specifically is significant: security use cases vary enormously in their model requirements. Automated log triage, alert classification, and routine pattern matching require fast, cheap, high-volume inference. Threat intelligence synthesis, novel malware analysis, and security research require frontier reasoning capabilities. Companies building security AI infrastructure that routes intelligently across model tiers will structurally outcompete those building on single-model architectures. This is a thesis we are actively exploring at the infrastructure layer of our portfolio.
The Investment Implication
Each of these predictions has a direct investment corollary. The Amazon/Anthropic enterprise AI narrative shift creates tailwinds for companies building on AWS-native AI infrastructure. The vibe coding security crisis creates an urgent market for automated security review tooling designed specifically for AI-generated code. The CISA vacuum creates durable demand for private-sector threat coordination and intelligence infrastructure. The AI pentesting ceiling preserves the premium for managed human security services. And the right-sizing imperative creates opportunity in inference routing and optimization infrastructure.
DataInx Ventures is deploying Seed capital across each of these areas in 2026. The cybersecurity and enterprise AI markets are not becoming less interesting as they mature—they are generating new category opportunities faster than the investment community is identifying them. The founders building at these intersections today are the ones whose outcomes will define the narrative of the decade ahead.