How To Assess AI-Native Companies Using Sapphire Ventures' 5D Framework
Over the past two years, discussion of AI-native applications and their potential to transform the enterprise software landscape has surged. Yet while the spotlight has mostly fallen on AI infrastructure and platforms, far less has been said about what actually defines an AI-native company and what the competitive implications are.
Although the initial excitement around Generative AI began with applications like ChatGPT in 2022, the so-called "application layer" has lagged behind other, deeper parts of the stack. That gap is narrowing, however: according to Sapphire Ventures, at least 47 AI-native applications are now generating $25M+ in ARR, up from 34 earlier this year, and the firm expects a similar number of companies to be recording upwards of $50M in ARR next year.
But when every tech company is trying to jump on the AI train and integrate AI into its core product, how do we define an AI-native application, and how do we evaluate these companies against their competitors? Sapphire, an investor in LinkedIn, DocuSign, Monday.com and at least 30 AI and data companies, has come up with a system it calls the "5D Framework" to do just that.
Defining AI-Native Applications
According to Sapphire, AI companies, applications and products must satisfy four requirements in order to be considered AI-native:
- They must be built on foundational AI capabilities: such as learning from large datasets, understanding context, or generating novel outputs;
- They must deliver outcomes that break traditional constraints of speed, scale and cost;
- They must be designed to continuously improve, both by leveraging advancements in underlying models and through feedback loops that refine performance using real-world data;
- They must involve some element of proprietary AI technology vs. 100% off-the-shelf capabilities (e.g., fine-tuning open-source models to improve specific features, model orchestration).
The definition of AI-native—clarifies Cathy Gao, Partner at Sapphire Ventures—is meant to be temporary and will gradually become implicit: “just as we rarely say internet-native, cloud-native or mobile-native anymore.”
For now, though, the distinction is useful: while the technology is still novel, entrepreneurs, VCs and operators alike benefit from standardised tools and definitions for assessing these new products.
Sapphire’s 5D Framework
Sapphire created the following framework to quantify competitive advantage in this new space, identifying five key dimensions: Design, Data, Domain Expertise, Dynamism and Distribution. The firm says AI companies "will need to differentiate their applications across several of these dimensions to establish durable category leadership, given the already increasing competitive intensity within enterprise software".
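As a rough illustration of how such a framework might be applied (this is not part of Sapphire's published material), the five dimensions can be treated as a simple scoring rubric: rate a company on each one and look for differentiation across several of them. A minimal, hypothetical sketch in Python:

```python
from dataclasses import dataclass, field

DIMENSIONS = ("design", "data", "domain_expertise", "dynamism", "distribution")

@dataclass
class FiveDScore:
    """Hypothetical 1-5 rating of a company on each of the five dimensions."""
    scores: dict[str, int] = field(default_factory=dict)

    def differentiated_dimensions(self, threshold: int = 4) -> list[str]:
        # Dimensions where the company scores at or above the threshold.
        return [d for d in DIMENSIONS if self.scores.get(d, 0) >= threshold]

    def is_durably_positioned(self, min_dimensions: int = 2) -> bool:
        # Sapphire's point: durable leadership requires differentiation
        # across *several* dimensions, not excellence on a single one.
        return len(self.differentiated_dimensions()) >= min_dimensions

# Example: a vertical AI company strong on data and domain expertise.
company = FiveDScore(scores={"design": 3, "data": 5, "domain_expertise": 5,
                             "dynamism": 3, "distribution": 2})
print(company.differentiated_dimensions())  # ['data', 'domain_expertise']
print(company.is_durably_positioned())      # True
```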

Design
Enterprise software, a multi-trillion-dollar market, has traditionally prioritised functionality over UX, resulting in complex, cluttered interfaces. This is now changing as design emerges as a key differentiator, with GenAI driving innovations in interaction, feedback, and system design.
GenAI enables new interaction models like natural language and multimodal interfaces, making advanced features more accessible and enhancing user engagement. Recent advancements, such as OpenAI’s Canvas and Anthropic’s co-creation tools, point to a shift from chatbots to richer, more intuitive experiences.
Accelerating feedback loops is another critical innovation. Techniques like Reinforcement Learning from Human Feedback (RLHF) and creative monitoring of user engagement (e.g., shares, hover time) help refine AI performance iteratively. AI-native systems combine off-the-shelf AI with proprietary elements, leveraging methods like fine-tuning and model orchestration to optimise the cost-performance balance.
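To make the feedback-loop idea concrete, here is a minimal, hypothetical sketch of how an application might log implicit engagement signals (shares, hover time) against individual model outputs so they can later feed prompt refinement or fine-tuning; all names are illustrative:

```python
import json
import time
from collections import defaultdict

# In-memory event log; a real system would stream these to a warehouse.
_events: list[dict] = []

def log_engagement(output_id: str, signal: str, value: float = 1.0) -> None:
    """Record an implicit feedback signal (e.g. 'share', 'hover_seconds')."""
    _events.append({"output_id": output_id, "signal": signal,
                    "value": value, "ts": time.time()})

def engagement_summary() -> dict[str, dict[str, float]]:
    """Aggregate signals per output, e.g. to pick examples for fine-tuning."""
    summary = defaultdict(lambda: defaultdict(float))
    for e in _events:
        summary[e["output_id"]][e["signal"]] += e["value"]
    return {k: dict(v) for k, v in summary.items()}

# Example usage: a generated answer gets shared and read for 12 seconds.
log_engagement("answer-123", "share")
log_engagement("answer-123", "hover_seconds", 12.0)
print(json.dumps(engagement_summary(), indent=2))
```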
Explainability is another feature essential to building trust, with applications like Perplexity.ai and Cognition providing citations, confidence intervals, and granular user interactions. This evolution marks a shift toward highly sophisticated, user-centred AI-native systems.
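The sketch below is not how Perplexity or Cognition actually implement this, but the explainability pattern itself is easy to illustrate: return the answer together with its supporting sources and a confidence estimate rather than a bare string. A hypothetical shape:

```python
from dataclasses import dataclass

@dataclass
class Citation:
    source_url: str
    snippet: str          # the passage the answer relies on

@dataclass
class ExplainableAnswer:
    text: str
    citations: list[Citation]
    confidence: float     # e.g. 0.0-1.0, derived from retrieval/model scores

    def render(self) -> str:
        refs = "\n".join(f"[{i+1}] {c.source_url}"
                         for i, c in enumerate(self.citations))
        return f"{self.text}\n(confidence: {self.confidence:.0%})\n{refs}"

answer = ExplainableAnswer(
    text="The 5D framework covers design, data, domain expertise, dynamism and distribution.",
    citations=[Citation("https://sapphireventures.com", "…five key dimensions…")],
    confidence=0.9,
)
print(answer.render())
```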
Data
Data is critical to AI-native applications, transforming foundational AI capabilities into targeted, defensible products that meet customer needs. Success in this space hinges on three key strategies:
- Rigor in Data Management: A strong AI strategy requires robust data practices, including procurement, quality assurance, governance, and security. As multi-modal AI advances, managing structured and unstructured data effectively will be essential. Companies that can rapidly and securely collect, clean, and integrate data will have a competitive edge.
- Leveraging Latent Data: AI-native applications unlock dormant or uncaptured data, such as files in storage systems or insights from calls and meetings. By enabling seamless access, structuring taxonomies, and translating unwritten knowledge into usable data, organisations can extract business value and optimise data architectures for AI (a minimal sketch of this idea follows below).
- Creating Proprietary Data Sets: GenAI allows companies to capture unique, net-new data, such as engagement patterns or metadata, which incumbents lack. This data can improve models and workflows, prioritising quality and relevance over sheer volume.
Examples like Glean, Writer, and Jeeva.ai showcase how tailored data strategies drive innovation and differentiation in AI-native applications.
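As a toy illustration of the "latent data" point above, unlocking files that sit untouched in a storage system can start with something as simple as walking the store and assigning documents to a taxonomy. The keyword rules below are placeholders for what would, in practice, be an embedding- or LLM-based classifier:

```python
from pathlib import Path

# Placeholder taxonomy: keyword -> category. A real system would use an
# embedding model or LLM classifier instead of keyword matching.
TAXONOMY = {
    "invoice": "finance",
    "contract": "legal",
    "meeting": "knowledge",
    "transcript": "knowledge",
}

def tag_document(path: Path) -> str:
    """Assign a document to a taxonomy category based on its name/contents."""
    text = (path.name + " " + path.read_text(errors="ignore")).lower()
    for keyword, category in TAXONOMY.items():
        if keyword in text:
            return category
    return "uncategorised"

def index_store(root: str) -> dict[str, list[str]]:
    """Walk a document store and group files by category."""
    index: dict[str, list[str]] = {}
    for path in Path(root).rglob("*.txt"):
        index.setdefault(tag_document(path), []).append(str(path))
    return index

# Example: index_store("/shared/drive") -> {"legal": [...], "finance": [...], ...}
```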
Domain Expertise
Vertical AI applications are rapidly scaling, leveraging GenAI’s ability to understand deep domain-specific contexts and automate workflows. These applications excel in translating domain-specific activity into AI-accelerated processes, enabling tasks like customer conversation transcription, legal research, and financial analysis to be completed more efficiently. By studying power users, companies are codifying advanced usage patterns into AI outputs, democratising expertise across organisations and levelling up entire teams.
AI-native applications also synthesise vast datasets at scale and speed, allowing users to derive actionable insights in minutes rather than weeks. Legal, healthcare, and financial services exemplify this trend, with tools like Harvey, Abridge, and Supio providing real-time, domain-specific support.
Moreover, these applications uniquely combine global knowledge from foundational models, domain-specific insights from industry databases, and company-specific data to automate workflows and achieve precise outcomes. Examples like Abridge’s clinical note generation and Magic School’s education-focused AI tools showcase the transformative power of vertical AI to revolutionise industry-specific workflows.
Dynamism
Ben Thompson's recent post titled "Meta's AI Abundance" highlights Meta's potential in leveraging GenAI for dynamic ad creation and personalised content, signalling broader shifts in user expectations for enterprise software. This evolution, driven by GenAI, is marked by three key dimensions:
- Optimising Actions "Under the Hood": Companies are moving beyond static systems, orchestrating sequences of model interactions to optimise performance and cost in real time. Tools like Martian's model routers enable flexible infrastructure, while advanced AI capabilities are becoming more autonomous, making decisions on behalf of users to improve efficiency and user experience (a routing sketch follows after this list).
- Creating Generative Customer Journeys: GenAI is transforming enterprise applications into dynamic, adaptive content systems, enabling hyper-personalised sales and marketing collateral, tailored commerce experiences, and even real-time generated user interfaces. Companies like Jeeva envision AI-driven, continuous learning systems that autonomously adapt to user interactions.
- Enabling Hyper-Personalisation: GenAI is unlocking personalised experiences at multiple levels, from individuals to organisations. Examples like Outreach’s custom win models and HeyGen’s conversational video platform illustrate how AI can fine-tune experiences and outputs in real-time.
These advancements redefine dynamism in enterprise software, enhancing flexibility, user engagement, and personalisation.
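This is not Martian's actual routing logic, but the basic pattern behind a model router is straightforward to sketch: estimate how demanding a request is, then send it to the cheapest model likely to handle it well. A hypothetical version, with made-up model names and prices:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Model:
    name: str
    cost_per_1k_tokens: float
    call: Callable[[str], str]   # stand-in for the real API client

# Illustrative tiers; names and prices are made up.
CHEAP = Model("small-fast", 0.10, lambda p: f"[small-fast] {p[:40]}...")
STRONG = Model("large-reasoning", 2.00, lambda p: f"[large-reasoning] {p[:40]}...")

def estimate_complexity(prompt: str) -> float:
    """Crude proxy: longer prompts and 'why/plan/analyse' requests score higher."""
    score = min(len(prompt) / 2000, 1.0)
    if any(w in prompt.lower() for w in ("why", "plan", "analyse", "multi-step")):
        score += 0.5
    return score

def route(prompt: str) -> str:
    """Send easy requests to the cheap model, hard ones to the strong model."""
    model = STRONG if estimate_complexity(prompt) > 0.5 else CHEAP
    return model.call(prompt)

print(route("Summarise this paragraph."))                     # cheap tier
print(route("Plan a multi-step migration and explain why."))  # escalated
```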
Distribution
The rise of GenAI is prompting significant experimentation in pricing and packaging, challenging the traditional seat-based SaaS model without fully replacing it. While no dominant model has emerged, companies are adopting varied approaches to balance value delivery and costs while mitigating competitive threats.
- Pricing and Packaging Flexibility: GenAI has created a heterogeneous pricing landscape, with companies embedding GenAI features into existing products (e.g., Workday), introducing premium SKUs, launching standalone applications, and testing consumption- and outcome-based models. Future pricing strategies will likely blend seat-based, consumption-based, and outcome-based models to maximise customer coverage and align pricing with delivered value (a back-of-the-envelope example follows below).
- New Business Models: GenAI is enabling software-enabled services and agent-based systems that deliver measurable business outcomes. Examples include Salesforce and Zendesk’s per-conversation pricing for AI agents, customer service tools like Sierra and Crescendo charging per resolved ticket, and content generation apps like Synthesia pricing per minute of video.
This flexibility signals a shift in how AI-native companies deliver and capture value, emphasising alignment with customer needs.
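To make the blended-pricing idea concrete, here is a hypothetical back-of-the-envelope calculation combining seat-based, consumption-based and outcome-based components; the rates are illustrative and not drawn from any of the companies mentioned:

```python
def monthly_bill(seats: int,
                 conversations: int,
                 resolved_tickets: int,
                 seat_price: float = 30.0,            # per seat per month
                 price_per_conversation: float = 0.05,
                 price_per_resolution: float = 1.50) -> float:
    """Blend of seat-, consumption- and outcome-based pricing components."""
    return (seats * seat_price
            + conversations * price_per_conversation
            + resolved_tickets * price_per_resolution)

# Example: 50 seats, 20,000 AI conversations, 4,000 resolved tickets.
print(f"${monthly_bill(50, 20_000, 4_000):,.2f}")  # $8,500.00
```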
What’s Next For AI-Native Apps?
The framework outlined provides a valuable lens for builders and investors to identify AI-driven differentiation at the application layer. While retrofitting existing products with advanced AI capabilities will create value, true breakthroughs will come from entrepreneurs who reimagine workflows, creating seamless, “always-on” multi-modal applications that integrate diverse capabilities into single experiences, potentially priced on a metered basis.
Achieving this future hinges on improvements across the tech stack, including performance optimisation, alignment, compliance, security, cost management, and reducing hallucinations. Encouragingly, many challenges are already known, and significant innovation remains achievable with current model capabilities if costs decline further. While new advancements like GPT-5 will shape expectations, innovation will also stem from reasoning research, agentic systems, and multi-modal models expanding their capabilities.
The pace of experimentation at the application layer is relentless, enabling builders to combine architectures, models, and data sources in unprecedented ways. While progress will be uneven, with many failed experiments, companies demonstrating combinatorial innovation and rapid integration of new capabilities will emerge as transformative leaders in the AI era.