The Question That Shapes Every Investment in This Space
When we evaluate investment opportunities at the intersection of artificial intelligence and cybersecurity, the most important framing question is not "how big is the market" or "what is the competitive landscape." It is: what is the nature of AI's relationship to security? Is AI fundamentally a weapon that reshapes the offensive-defensive arms race? Or is it more accurately understood as new terrain—an unmapped territory with its own hazards, rules, and opportunities that require a fundamentally different mode of navigation than the tools we have built to date?
The answer, after considerable analysis, is that both framings are correct simultaneously, and that the most sophisticated investors and founders are those who can hold both in mind and build accordingly. The companies that will define the AI-security landscape of the next decade are those whose architectures reflect both the arms-race dynamics that AI accelerates and the exploratory, adaptive posture that AI's fundamental novelty demands.
Lens One: AI as the Airplane — The Arms Race Escalation
The first lens for understanding AI in security is historical. Airplanes transformed warfare between World War I and World War II in ways that were not obvious at the outset. Aircraft went from reconnaissance platforms to fighters to strategic bombers within two decades, bypassing trench warfare entirely, projecting force deep into enemy territory, and changing the logic of military strategy from territorial defense to industrial capacity. The defenders adapted: anti-aircraft artillery, fighter doctrine, radar networks. The arms race escalated in both capability and complexity.
AI is playing an analogous role in cyber conflict. Large language models are the strategic bombers of this new era—not because of their raw destructive capacity, but because of their ability to industrialize attacks that previously required significant human expertise per operation. Sophisticated phishing campaigns that once required skilled social engineers to craft convincingly personalized messages can now be generated at industrial scale by any operator with API access. Malware variants can be generated faster than signature databases can be updated. The State Department has already documented AI-generated impersonation of senior officials—including an attempt to impersonate the Secretary of State to contact foreign governments.
The defensive adaptations are following the same escalation logic. AI-enabled behavioral detection systems are replacing signature-based tools. Automated threat hunting platforms are extending analyst capacity. The Cellebrite acquisition of Corellium for $170 million reflects the intensifying demand for sophisticated mobile vulnerability research infrastructure as mobile attack surfaces become more central to enterprise security. The arms race has leveled up: faster, stranger, and with significantly lower barriers to entry on the offensive side.
LLMs are the strategic bombers of the new cyber conflict era—not for their destructive capacity per se, but for their ability to industrialize attacks that previously required human expertise at every step. The defenders must adapt with equivalent speed or accept permanent disadvantage.
The investment implication of this lens is clear: defensive infrastructure that operates at machine speed—automated detection, automated response, AI-enabled threat intelligence correlation—is not optional for the enterprise security stack. It is existential. The companies building this infrastructure are addressing mandatory demand that will grow regardless of economic cycles, budget pressures, or technology fashion.
Lens Two: AI as the New World — Unmapped Terrain
The second lens is more unsettling, and in our view, more important for understanding the longer-term dynamics. AI is not just a tool that reshapes existing attack and defense strategies. It is fundamentally new terrain—a vast, partially understood space with its own topology, hazards, and opportunities that the existing security framework was not designed to map.
Think of it this way: early explorers of unfamiliar continents operated without reliable maps, without accurate models of what they would encounter, and with assumptions formed in entirely different geographic contexts that turned out to be systematically wrong. They built settlements in locations that seemed defensible by familiar logic and turned out to be vulnerable in entirely unfamiliar ways. They drew maps that confidently named territories that were entirely misrepresented. They made foundational errors that persisted for generations because the wrong assumptions became institutionalized before the territory was understood.
The AI security landscape is in exactly this phase. Red teams probing model boundaries are discovering that the vulnerability surface of large AI systems is fundamentally different from traditional software vulnerabilities. Prompt injection attacks exploit the inability of language models to reliably distinguish between instructions and data—a class of vulnerability that has no clean analogue in conventional security frameworks. Jailbreaks reveal that model alignment is not a binary property but a probabilistic one that degrades under adversarial pressure in ways that are not fully predictable from the training process. AI-targeted cloaking attacks can present different content to human users and AI crawlers simultaneously, enabling misinformation to be injected into AI knowledge bases while appearing legitimate to human reviewers.
The most dangerous aspect of the new terrain framing is that early assumptions—including the assumptions being institutionalized in enterprise AI security frameworks today—are almost certainly wrong in ways we will not understand until significant failures have occurred. The companies and standards bodies building AI security infrastructure today are operating with the equivalent of a Ptolemaic map of an undiscovered continent. Some of what they build will be directionally correct. Some of it will be confidently wrong in ways that create systemic exposure.
What This Means for Security Investment
The terrain framing has direct implications for how we evaluate security companies operating in the AI space. We are skeptical of companies that claim comprehensive AI security coverage based on frameworks developed for traditional software security. The vulnerability surface is genuinely different, and mapping it requires empirical research, not the application of existing categories to a new domain.
We are constructive on companies that approach AI security with epistemic humility—those building detection and response infrastructure that is designed to evolve as the terrain is better understood, rather than those selling complete solutions to problems that are not yet fully characterized. Red team research organizations, adversarial testing infrastructure for AI systems, and security monitoring platforms designed specifically for AI runtime behavior are areas where we see genuine white space that has not yet been addressed by either incumbents or well-funded startups.
The Talent War and Its Security Implications
Understanding AI's role in security requires acknowledging the extraordinary concentration of talent that drives both offensive and defensive capability in this space. The AI talent market has reached compensation levels that were unthinkable five years ago: documented packages of $100 million for individual researchers reflect a level of winner-take-most dynamics that has no precedent in the history of technology hiring. Mark Zuckerberg's personal recruitment of top AI researchers from competitor organizations—reportedly based on a private list of the highest-value AI researchers globally—illustrates how seriously the largest technology companies view the talent concentration question.
For security specifically, this talent concentration creates a structural asymmetry. The offensive security community—nation-state adversaries, criminal organizations, and gray-market vulnerability research firms—operates with different constraints on compensation and ethics than enterprise security vendors. The researchers building AI-enabled attack capabilities are not bound by the same alignment considerations that constrain defensive tool builders. This is not a new dynamic in security: offensive research has always had different incentive structures than defensive tooling. But AI amplifies the asymmetry, because the foundational models that power offensive capabilities are the same models that defensive tools are built on—and the offensive use cases have fewer guardrails.
The Portfolio Construction Implication
At DataInx, we hold both the weapon lens and the wilderness lens simultaneously in evaluating AI security investments. Companies addressing the arms race escalation—automated detection, AI-enabled threat intelligence, machine-speed response infrastructure—are building against mandatory demand in a market where the need is structurally increasing. We are active investors in this space.
Companies addressing the wilderness mapping challenge—AI-specific vulnerability research, prompt injection detection, AI behavior monitoring, adversarial testing infrastructure—are building in a space where the market is less mature but where the eventual demand may be larger. The enterprise world has not yet fully internalized the scope of AI security risk, but the incidents that will force that reckoning are accumulating. The AI impersonation of senior government officials documented in 2025 is a preview of what large-scale AI-enabled social engineering looks like against corporate targets. When those incidents begin affecting public companies in material ways, the demand for AI-specific security infrastructure will move from early-adopter security teams to mainstream enterprise procurement.
The founders who are building that infrastructure today—with genuine technical depth in AI systems behavior, not just the application of traditional security frameworks to new target systems—are the ones we want to back at the Seed stage. The terrain is still being mapped. The companies that help map it correctly will define the security infrastructure of the next decade.