Applikai FAQ

Product

What is Applikai?

Applikai is a web-based collaborative workspace where humans and ambient AI agents build and evolve mobile applications together. Users work on an infinite canvas and collaborate in real time on a shared application model (intent), not on generated code.

Who is Applikai for?

Teams building mobile and internal/operational apps - product, design, engineering, ops, data, marketing and sales working together under high iteration pressure. Agencies and studios building apps for clients are one of our primary ICPs within this broader segment.

What do users do in Applikai?

Users design screens, define flows, connect data and refine app behavior on a shared canvas. AI agents show up as collaborators (engineering, design, ops, data, marketing, sales, ...) and can proactively suggest or apply changes. The same workspace is used both to create a first version and to iterate on an existing product, including viewing key usage/churn signals and turning them into product changes.

What does "ambient AI" mean?

Ambient AI means agents stay present, maintain context and understand the current application state over time. They can act proactively when needed, but only through constrained, validated operations on the shared model - not free-form code edits.

Why does intent vs syntax matter?

Code is a great execution format, but a poor collaboration format at scale. A shared intent model enables semantic diffs ("flow updated", "screen added"), safer global changes and real-time collaboration that remains coherent as the app evolves.

What problem are you solving?

Most AI dev tools generate or edit source code on demand. That works for prototypes, but becomes brittle once an app needs to evolve across iterations with multiple humans and AI touching the same codebase. Coordination becomes manual work, and the cost of change grows fast.

What is the "shared application model" (and where does the DSL fit)?

Under the hood, every app is represented as a structured model of intent: screens, components, flows, state, rules, data bindings and constraints. We sometimes call this our DSL (domain-specific language), but users do not write it. The model is the system of record that humans and AI agents edit through validated operations.

Why mobile first?

Mobile concentrates the hardest constraints: state, offline behavior, performance, permissions, store submission and compliance. Code-first and on-demand generation approaches break quickly here after the first prototype. If continuous collaboration works for mobile, it generalizes to other app categories.

Are you building native iOS/Android?

Our goal is to produce production-grade mobile apps with access to real mobile capabilities. The exact runtime approach is a product decision (native, cross-platform, or hybrid), but the core differentiation is the shared application model and collaboration layer that drives the app.

What is supported (and not yet) + technical foundations

What can you build in the first usable version?

A focused set of mobile app primitives: common screens, navigation patterns, forms, lists, authentication flows, camera, GPS, background tasks and basic data interactions - optimized for internal/operational apps and long-lived iteration cycles.

What is not fully supported yet?

We will progressively expand coverage. Early on:

  • Not every native capability is available on day one.
  • Not every UI component exists day one - we focus on the most common building blocks first.
  • Enterprise governance features (SSO, audit logs, SOC2) are roadmap items.
  • Store submission automation and compliance helpers are delivered incrementally.

How does real-time collaboration work?

Collaboration is built on model operations (not file merges). Users and agents edit the same structured model, with real-time sync, permissions, history and conflict resolution designed for collaborative editing.

How do AI agents interact with the app?

Agents propose and execute validated operations on the model (for example: "add screen", "update flow", "change rule", "add tracking event"). A validation layer enforces invariants, and the system can support review/rollback workflows.
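As a sketch of what a validated operation could look like in practice, here is a toy model-edit loop. All operation names, the model shape, and the invariant shown are hypothetical illustrations, not Applikai's actual API:

```python
# Illustrative sketch only: hypothetical operation names and model shape.

def validate(model, op):
    """Reject operations that would break a model invariant."""
    if op["type"] == "update_flow":
        # Example invariant: a flow may only reference screens that exist.
        missing = [s for s in op["screens"] if s not in model["screens"]]
        if missing:
            raise ValueError(f"flow references unknown screens: {missing}")

def apply_operation(model, op):
    validate(model, op)
    if op["type"] == "add_screen":
        model["screens"][op["name"]] = {"components": []}
    elif op["type"] == "update_flow":
        model["flows"][op["flow"]] = list(op["screens"])
    model["history"].append(op)  # history enables review/rollback workflows
    return model

model = {"screens": {}, "flows": {}, "history": []}
apply_operation(model, {"type": "add_screen", "name": "Login"})
apply_operation(model, {"type": "add_screen", "name": "Home"})
apply_operation(model, {"type": "update_flow", "flow": "onboarding",
                        "screens": ["Login", "Home"]})
```

An operation that referenced a missing screen would be rejected by the validation layer before it ever touched the shared model.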

Do you generate code?

Code and runtime artifacts can be generated or adapted from the model, but the model remains the source of truth. This is what keeps iteration safe and collaboration scalable.

Performance - what do you target?

We design for mobile constraints. We target smooth UI on supported devices and fast startup, with performance depending on the app's complexity, assets and device capabilities. We optimize the engine, caching and updates as the product matures.

Can you push updates without app store reviews?

We follow platform rules. Some changes can be shipped as content/model updates; other changes require a store release. We design the system to keep compliance clear and avoid grey areas.

How do you handle user data?

We minimize sensitive data in the core platform and design for least-privilege access. Enterprise-grade security (SSO, audit logs, SOC2) is on the roadmap as we move from design partners to broader deployments.

Adoption, business model and competition

How do teams adopt Applikai?

The natural motion is bottom-up: one person starts a project, then invites teammates and stakeholders to collaborate on the same canvas. Agencies can also adopt by building client projects and scaling usage across accounts.

How do you make money?

SaaS pricing with a base license (seats/teams) plus usage-based AI (tokens/credits) for heavy agent usage. We expect expansion as teams invite more collaborators and adopt more agent workflows.

Who are your competitors and what's different?

Vibe coding platforms and AI IDEs optimize for generating the first version quickly, but keep code as the source of truth. That makes iteration and collaboration fragile beyond the prototype. Applikai is built around a shared application model, real-time collaboration and ambient agents operating safely on intent, not on code diffs.


Market

What market is Applikai addressing?

Applikai targets the emerging market of AI-native software development tools: products that help teams build and evolve applications with AI, not just generate a first version. We start with mobile, where state, offline behavior, deployment constraints, and multi-stakeholder collaboration make iteration hardest and the need most acute.

Why now?

  • AI coding tools have reached real scale (Lovable and Cursor reporting very large ARR run-rates).
  • Teams are shifting from one-off code generation to continuous, multi-actor development (humans + AI agents).
  • All the current tools still treat code as the source of truth, which works for prototypes but becomes brittle and unscalable as apps evolve.

 

Selected public signals:

  • Lovable: publicly communicated $100M ARR milestone in 8 months.
  • Cursor: publicly reported very large ARR run-rate and major funding rounds.
  • Application development & low-code markets continue to grow (~24% CAGR 2024-2030, Grand View Research).

TAM

Spend-based approach

TAM = global spend on tools used to build and evolve applications (AI-native development, app-building and iteration tooling). Mobile is a wedge, not a separate TAM line item.

 

TAM (2025/26): ~$40B-$70B in annual spend.

TAM grows to ~$150B-$270B over the next 5-8 years as low-code and AI-native development expand.

Anchor: Low-code development platforms alone are estimated at ~$37.4B in 2025 (Fortune Business Insights), before adding AI coding tools and adjacent app development spend. (https://www.fortunebusinessinsights.com/low-code-development-platform-market-102972)

 

Key TAM assumptions

  • Paid seat ARPU (license): $25-$75 / seat / month (range across dev tools & collaborative creative tools).
  • AI usage (tokens/credits): $10-$50 / active seat / month (usage varies widely; margins depend on model choice + caching + guardrails).
  • Buying units: solo, teams & agencies first; enterprise expands after (focus on collaboration-heavy, long-lived use cases).
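As a sanity check, the ARPU assumptions above can be inverted to see how many paid seats the ~$40B-$70B TAM implies. This is our back-of-envelope arithmetic, not a figure stated in the document:

```python
# Back-of-envelope check of the stated TAM assumptions (illustrative only;
# the implied seat counts below are derived, not quoted figures).

def annual_spend_per_seat(license_mo, ai_mo):
    return (license_mo + ai_mo) * 12

low = annual_spend_per_seat(25, 10)    # $420 / seat / year (low ARPU bound)
high = annual_spend_per_seat(75, 50)   # $1,500 / seat / year (high ARPU bound)

# Seats implied by a $40B-$70B TAM under these per-seat bounds:
seats_low = 40e9 / high    # ~27M seats at the high ARPU bound
seats_high = 70e9 / low    # ~167M seats at the low ARPU bound
```

So the TAM range is consistent with roughly 30M-170M paid seats worldwide, depending on where ARPU lands in the stated ranges.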

SAM

SAM (initial wedge: mobile-first, collaboration-heavy teams and builders): ~$6B-$15B

This reflects capturing ~15–25% of the TAM initially (mobile + long-lived apps where iteration and coordination costs are highest), with expansion to web/backend/slides/animation workflows later.

SAM focuses on the subset of the market that matches Applikai’s initial product constraints and wedge: mobile-first teams, solo builders, and founders that iterate frequently and benefit from real-time collaboration + persistent AI agents.

SOM

The SOM is built on two distinct pricing models that map to two distinct segments:

  • individual/team licence,
  • agency per-project (plus an enterprise layer from Year 2).

Each has its own ARPU logic and growth driver.

Revenue model by segment

Segment | Model | Pricing | ARPU | + Token ($30/u/mo) | Total ARPU / yr
Solo / Founder / SME | Per seat (licence) | $49/seat/mo indiv · $39/seat/mo teams | ~$540/seat | +$360/seat | ~$900/seat
Agency / Studio | Base licence + per active project | $299/mo base (5 proj.) + $89/proj. · avg. 15 proj. → ~$1,185/mo | ~$14,200/agency | +$10,800 (30 users × $30 × 12) | ~$25,000/agency
Enterprise | Per seat, annual contract | $200/seat/mo · min 20 seats · SOC2/HIPAA | ~$72K (30 seats) | +$10,800 (30 users × $30 × 12) | ~$83K ACV
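The agency line compresses several numbers into one cell. One plausible reading (the formula below is our reconstruction, not an official pricing rule) is that the $299/mo base covers 5 projects and each additional active project bills at $89/mo:

```python
# Hypothetical reconstruction of the agency pricing logic, for illustration.

def agency_monthly(projects, base=299, included=5, per_extra=89):
    # Base covers `included` projects; extras bill per project.
    return base + per_extra * max(0, projects - included)

monthly = agency_monthly(15)     # 299 + 89 * 10 = 1,189/mo
annual_license = monthly * 12    # ~14.3K/yr licence revenue per agency
tokens = 30 * 30 * 12            # 30 billable users x $30/mo x 12 = 10,800
total = annual_license + tokens  # ~25K total ARPU per agency
```

Under this reading the model yields $1,189/mo and ~$14.3K/yr, consistent with the ~$1,185/mo and ~$14,200/agency figures after rounding, and ~$25K total ARPU once token usage is added.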

SOM projection

 

Metric | Year 1 | Year 2 | Year 3 | Year 4
ARR range | $5–12M | $48–94M | $213–384M | $657M–1.15B
Figma (ref) | ~$10M | ~$50M | ~$150M | ~$400M
Lovable (ref) | ~$150M | ~$400M | ~$800M | ~$1.5B+

Key assumptions:

  • agency channel live by Month 9: avg. 15 active projects × 2 active users per project = 30 token-billable users per agency
  • PM segment acquired via bottom-up team adoption (one PM → product team)
  • enterprise from Month 18.

 

Why PMs change the model: they are team-budget buyers, not personal-card buyers.

A single PM seat converts to a team licence within 3 to 6 months in 50% of cases, making them the highest-leverage bottom-up acquisition vector after agency introductions.

Applikai is designed to onboard PMs without forcing them to learn a new tool, new habits, or a new workflow.

Why is mobile the right bet versus web for Applikai?

Three structural reasons:

  • The gap is larger: Lovable, Bolt, and base44 are all web-first. No platform has achieved the mobile equivalent of Lovable's scale. Mobile development is 3-5x more complex than web and proportionally more painful to automate, which is precisely why the opportunity is larger.
  • The unit economics are better: Mobile apps command higher prices, longer retention cycles, and more predictable compute usage than web apps. Better margins for the platform and higher willingness-to-pay from users.
  • The moat is deeper: App Store distribution, native performance (120 FPS, sub-100ms cold start), and built-in telemetry/A/B testing are defensible capabilities tied to Applikai's engine architecture. Web-first competitors cannot replicate them without an existential rewrite.

Applikai: Lovable for mobile

How does making Applikai an app engine help you build mobile applications?

How do you reduce App Store rejection risk?

Our app engine ships compliant behaviors by default: for example, App Tracking Transparency (ATT) and telemetry.

We auto-generate privacy manifests, nutrition labels, required policies, and permission explanations from actual data flows.

Agents flag risky patterns (aggressive notifications, missing attribution) during building.

Compliance wizards guide users through edge cases such as apps for kids or apps handling financial data.

How does Applikai differ from code-generation tools like Lovable, Bolt, or Cursor?

How does having an app engine enable better apps for the end user?

Consistently high frame rates: We target up to 120 FPS on supported hardware and deliver smooth performance on all devices.

Fast cold starts: An app's typical compressed size (without assets) is a few hundred KB, with cold starts targeted under 100 ms.

Low resource footprint for lower battery consumption: C++ and Metal allow for highly optimized code.

Fluid, native-feeling interactions: Gestures, haptics, keyboard handling, safe areas, dynamic type, VoiceOver/TalkBack, reduced motion, ... all owned by the engine. No stutter.

What can be soft-updated, and when is a full store push mandatory?

Applikai is in a grey area of the App Store guidelines.

Wireframes, navigation, etc. are fed as data to the engine, not executable or interpreted code.

We choose to enforce a policy stricter than required, to prevent any issue with Apple review.

We only allow text, images, colors, safe A/B variants of pre-approved elements, and changes that do not alter app behavior.

Other updates go through the classic app review process.

Competition

How easily can the competition copy Applikai?

In short, hard to replicate without major rewrites.

Code-based app builders cannot implement most of Applikai's features or do so as effectively.

Using Git makes agentic collaboration hard by default and impossible for non-technical people.

You cannot "just add AI" on top of an existing product with years of legacy.

For example, Figma could not integrate agentic features into Figma Design and had to ship a completely separate Figma Make.

Competitors will focus on their markets and existing customers, not start a full rewrite that will render all existing projects obsolete.

Why won't Figma compete with Applikai?

Core technical mismatch

  • Browser-based design canvas + .fig file format prevents native mobile runtime execution and packaging
  • No structured entity system or version control adapted for real-time multi-agent editing
  • Web-based code foundation eliminates instant mobile previews and over-the-air updates

 

Figma cannot "just add AI" into its product

  • No 1-to-1 export to mobile
  • Figma Make is a different product from Figma Design: the data model of Figma Design was not built for agentic workflows
  • Figma Make and AI features output prototypes. Not a complete mobile app.
  • Entire product, plugins, templates, variables, and collaboration features are built around design handoff.
  • Existing customer files, team processes, and ecosystem (plugins, Dev Mode, FigJam integration) cannot be discarded

 

Strategic reality

  • Talent, culture, and internal processes are built around design tools, not working applications
  • ~$14.5B market cap and ~$1.37B revenue guidance for 2026 (post-IPO reset from 2025 peak) leave no room for a radical pivot to own the full app executable stack

 

Figma attempting to become Applikai would mean abandoning its core identity as the universal design tool, fragmenting its ecosystem of 10M+ designers, and diluting the value of every existing .fig file.

All to enter a market it has never operated in.

Why won't Lovable compete with Applikai?

Core technical mismatch

  • Git prevents real-time multi-user editing
  • Git blocks coordinated work across dozens of AI agents
  • Web-based code foundation eliminates instant mobile previews and over-the-air updates

 

Lovable's legacy inertia

  • 8 million+ users and paying customer projects lock the company into its current architecture
  • The mutable codebase plus every prompt template, diffing agent, and Git tool will stay in place
  • LLM orchestration for web codebases will not be replaced

 

Strategic reality

  • Talent, culture, and internal processes are built to ship web apps
  • Core engineering team has zero experience building production app engines for cross-platform mobile infrastructure
  • $6.6B valuation and $200M ARR (Series B 2025) eliminate any room for a full pivot

 

Lovable copying Applikai means scrapping their architecture, their GTM, and their valuation while rebuilding from zero in a domain they have no experience in.

Technical execution

What is required to have an app engine?

Layout: Vertical, Horizontal, Grid, ...

Components: Button, Email Input, ...

Logic: what to do when an element is clicked, ...

Navigation: the user flow

Animation: transition between screens, fade-ins, fade-outs, ...

Native & hardware features: notifications, camera, ...

Data Model: Users, Images, Messages

What is the technical stack used?

The core of the engine is in C++.

 

For the graphics:

  • WebGL in the browser,
  • Metal on iOS,
  • Vulkan on Android.

 

WebSockets for the real-time networking.

How are UI components implemented without the native SDKs or third-parties like Shadcn?

We implement a core set of primitives in-house, and expand coverage progressively.

We avoid implementing every component in isolation to prevent combinatorial complexity.

Instead of having each UI element as a separate widget, we use a unified component system built from layout, logic, display, and animation properties.

The logic of a component is a set of behaviors (clickable, scrollable, ...).

Every component carries all the properties needed to render any type of UI element.

Each component supports text content, background color, border thickness, ... even if not used.

Components are also composable: a list item will be built from other components.

This makes the component system flexible and powerful.

Additionally, using AI, we can quickly generate many components from a few dozen well-crafted examples.
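A minimal sketch of this unified component system, with hypothetical property and behavior names (the real engine is written in C++ and will differ):

```python
# Illustrative sketch of a unified component system: one structure carries
# the full property set, behaviors are a set of flags, and components compose.

from dataclasses import dataclass, field

@dataclass
class Component:
    kind: str
    # Every component carries all properties, even when unused.
    text: str = ""
    background: str = "transparent"
    border_thickness: float = 0.0
    behaviors: set = field(default_factory=set)   # e.g. {"clickable", "scrollable"}
    children: list = field(default_factory=list)  # composition instead of bespoke widgets

def make_button(label):
    return Component(kind="button", text=label, behaviors={"clickable"})

def make_list_item(title, thumb_url):
    # A list item is built from other components, not implemented in isolation.
    return Component(kind="list_item", behaviors={"clickable"}, children=[
        Component(kind="image", text=thumb_url),
        Component(kind="label", text=title),
    ])

item = make_list_item("Inbox", "thumb.png")
```

Because every component shares one property set, new UI elements are mostly new combinations of existing properties and behaviors rather than new code paths.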

How will Applikai support additional or custom components?

We chose not to support every possible case to avoid making Applikai overly complex.

Most apps use the same few components (Shadcn UI, for example, has ~60 standard components).

We will roll out components incrementally. There is no need to launch with 300 components.

We will roll out new components based on telemetry, user requests, and agent analysis.

We plan to have multiple agents proactively coding and suggesting new components.

For agencies, or big companies, an enterprise tier can include the development of custom components.

Our ability to roll out components incrementally demonstrates that we are a dedicated team that listens to the community.

What is the current state of the product?

Applikai is a deep-tech product. Not something that can be assembled from off-the-shelf components or generated with a prompt.

Our CTO Julien has been building the core engine solo: the real-time networking layer, the component system, and the agentic harness.

We are raising to move from validated architecture to shipped product: hiring the experienced engineers who will build on top of what Julien has laid down, and accelerating the roadmap with our first design partners.

What components are already implemented?

Vertical/Horizontal layout

Checkbox

Button

Double Slider

Icon

Image

Label

Loader

Progress Bar

Slider

Subtitle

Text

Text Input

Title

Wireframe (root component)

What layout options are already implemented?

Horizontal/vertical alignment (left/center/right + top/center/bottom).

Width and height in pixels, percent, hug, or fill.
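Resolving one dimension under these four sizing modes can be sketched as follows. The function name and exact semantics are assumptions for illustration, not the engine's actual layout algorithm:

```python
# Illustrative resolution of a single dimension spec: px / percent / hug / fill.

def resolve_size(spec, parent_size, content_size, used_by_siblings=0):
    kind, value = spec
    if kind == "px":        # fixed size in pixels
        return value
    if kind == "percent":   # fraction of the parent's size
        return parent_size * value / 100
    if kind == "hug":       # shrink to fit the content
        return content_size
    if kind == "fill":      # take the space the siblings left in the parent
        return parent_size - used_by_siblings
    raise ValueError(f"unknown size spec: {kind}")

resolve_size(("px", 120), 390, 80)          # fixed 120 px
resolve_size(("percent", 50), 390, 80)      # half of a 390 px parent
resolve_size(("hug", None), 390, 80)        # content size, 80 px
resolve_size(("fill", None), 390, 80, 140)  # remaining 250 px
```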

Is making an animation system difficult?

In our case, it is straightforward.

An animation is simply an interpolation of a component's properties between two points.

In our component system, all components have all properties (position, size, color, ...).

An animation consists of a component ID, a property ID, a duration, and an easing function.
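The model above fits in a few lines. Names are illustrative (the engine implements this in C++):

```python
# Minimal sketch of the animation model: interpolate one property of one
# component over a duration, shaped by an easing function.

def ease_in_out(t):
    # Smoothstep easing: 0 -> 0, 1 -> 1, slow at both ends.
    return t * t * (3 - 2 * t)

def animate(component, prop, start, end, duration, easing, t):
    """Set `prop` to its interpolated value at time t in [0, duration]."""
    progress = min(max(t / duration, 0.0), 1.0)
    component[prop] = start + (end - start) * easing(progress)
    return component[prop]

button = {"opacity": 0.0}
animate(button, "opacity", 0.0, 1.0, 0.3, ease_in_out, 0.15)  # halfway through
animate(button, "opacity", 0.0, 1.0, 0.3, ease_in_out, 0.3)   # fully faded in
```

Because every component carries every property, the same four-tuple (component, property, duration, easing) animates position, size, color, or opacity without special cases.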

Why not use Lottie?

We can use the best Lottie player without issue in Applikai.

Lottie files can be easily supported.

Lottie does not handle basic use cases like screen transitions.

Lottie will be integrated after Applikai’s initial release.

How do you access native/hardware features like the camera?

Applikai has a bridge to hardware features.

On iOS, dedicated Swift code handles the camera and can be called from the app. Same for other features.

Most apps use the same limited set of native/hardware features (camera, gallery, permissions, contacts).

We will roll out new native and hardware features over time based on telemetry, user requests, and agent analysis.

We plan to have multiple agents proactively coding and suggesting new native/hardware features.

Most, if not all, low/no-code app builders struggle with the complexity required to replace code. How do you tackle this issue?

We chose not to support every possible case; otherwise Applikai would become a Rube Goldberg machine.

For almost all apps, the logic and operations performed are confined to a simple scope.

Mobile apps also operate on strict conventions that users expect. For example, the back arrow in the top-left corner.

We also divide the complexity of an app into separate, local problems.

For instance, navigation (with screen transitions) is a completely separate system from the wireframes.

We use our infinite canvas to showcase information, like a navigation graph.

We believe that dedicated tooling can replace code in almost all cases.

Before AI, the problem was the clunky interface and the tedious hours spent configuring everything that could otherwise be done in 2 lines of code.

With AI, all of it disappears.

The AI translates plain English into whatever is needed.

How do you ensure functional specifications in plain English are precise enough?

Plain English is only an input to the agents.

Applikai's agents transform the plain English input into structured data.

Applikai uses semantic components (button, image, etc.) and structured English (e.g., ‘when [event] happens, do [action]’) for logic.

Applikai uses a navigation graph and other visual tooling.

The user speaks in English and receives structured, understandable, and auditable data.
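A toy illustration of turning that structured-English pattern into auditable data. The regex and field names below are assumptions for the sketch, not Applikai's actual parser:

```python
# Illustrative parser for the 'when [event] happens, do [action]' pattern.

import re

RULE = re.compile(r"when (?P<event>.+?) happens, do (?P<action>.+)", re.IGNORECASE)

def parse_rule(sentence):
    m = RULE.fullmatch(sentence.strip())
    if not m:
        raise ValueError(f"not a recognized rule: {sentence!r}")
    # The output is structured data that agents and users can both audit.
    return {"event": m.group("event"), "action": m.group("action")}

rule = parse_rule("When tap on Submit happens, do navigate to Home")
```

In the product the agents do this translation, so users type plain English and the model stores only structured, validated rules.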

How are built-in A/B testing and telemetry possible?

Applikai sees every tap, every interaction.

Applikai also knows all your screens, your navigation graph, and the A/B variants.

The app engine handles the ATT (App Tracking Transparency).

What prior deep-tech projects demonstrate the team’s ability to build Applikai?

Julien and Antoine were co-workers at Tempow. They shipped highly performant Bluetooth drivers in the Android kernel to 50M+ mobile devices.

Antoine implemented the LC3 Bluetooth Audio Codec for Google embedded systems.

Agentic AI

What are the agentic workflows of Applikai?

We use an Orchestrator/Worker/Judge model.

Orchestrators produce a task graph, workers execute the tasks in the correct order and judges validate.

We run Claude-powered agents (or equivalent LLMs) in dedicated VMs, each with full access to project data.
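A minimal, non-LLM sketch of the Orchestrator/Worker/Judge shape. Task names are invented for illustration; the real workers are LLM-backed agents running in dedicated VMs, as described above:

```python
# Illustrative Orchestrator/Worker/Judge loop with stubbed-out agents.

def orchestrate(goal):
    # Orchestrator: break the goal into an ordered task graph (a list here).
    return [f"{goal}: design screen", f"{goal}: wire navigation"]

def work(task):
    # Worker: execute one task and return a result for judgment.
    return {"task": task, "status": "done"}

def judge(result):
    # Judge: validate the worker's output before it reaches the shared model.
    return result["status"] == "done"

results = [r for r in map(work, orchestrate("login flow")) if judge(r)]
```

Only judged results flow onward, so a misbehaving worker cannot corrupt the shared application model.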

How do you solve the multiple agent coordination issue?

With Git, collaboration is very hard.

This is also true for humans without any agents.

Git is asynchronous. You get large edits that involve merging with a conflict resolution step.

We use a real-time model. No branch, no asynchronous editing.

Users and agents edit an app by submitting small deltas: Adding a button, changing a color, ...

A delta has a defined target and action.

This makes the agent-to-agent, agent-to-user, and user-to-user collaboration trivial compared to Git.
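The delta model can be sketched as follows. The delta shape (target, property, value) is an assumption for illustration:

```python
# Illustrative delta-based editing: each edit is a small, self-describing
# change applied to a shared model, so concurrent edits to different
# properties never need a merge or conflict-resolution step.

def apply_delta(model, delta):
    target = model.setdefault(delta["target"], {})
    target[delta["property"]] = delta["value"]
    return model

shared = {}
# A user and two agents editing concurrently, seen as one interleaved stream:
for delta in [
    {"target": "button:submit", "property": "color", "value": "#0A84FF"},  # user
    {"target": "screen:home",   "property": "title", "value": "Home"},     # agent A
    {"target": "button:submit", "property": "label", "value": "Send"},     # agent B
]:
    apply_delta(shared, delta)
```

Because each delta names its exact target and action, edits from different actors compose directly instead of producing Git-style merge conflicts.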

Adoption

Why would agencies adopt Applikai?

The real risk is not generic switching cost; it is workflow inertia.

Agencies have a Figma → handoff → dev pipeline that their clients have accepted, their developers know, and their project managers can estimate.

Disrupting that is not a tooling decision. It is a strategic one.

 

We do not win by asking to replace existing workflow.

We win by inserting into it at the point of highest pain.

The specific moment agencies hate most is the gap between design and the working app: where pixel-perfect Figma files become imperfect React Native implementations, where "it looks different on my phone," where A/B testing requires wiring up a third-party SDK.

That gap costs time, budget, and client trust on every single project.

 

Applikai eliminates that gap structurally, not just incrementally.

The design is the app. There is no handoff.

 

So the adoption conversation with an agency isn't "abandon your stack."

It is "what if your next greenfield project shipped 40% faster with zero design-to-dev translation loss?"

That's a one-project trial, not an existential commitment. Agencies run parallel stacks all the time.

 

Rocapine and Quiet are design partners precisely because they're high-volume studios.

They ship many apps, they feel this pain on every project, and they have the appetite to try a new tool without betting their entire business on it.

How do you overcome designers’ resistance to leaving Figma?

We are following the same strategy Figma used to displace Sketch.

We offer features that deliver a 10× improvement in the overall experience, including but not limited to:

  • Fully functional wireframes in desktop editor
  • Instant sync desktop editor / mobile device
  • Pixel-perfect apps by default
  • Multiple simultaneous AI agentic workflows

We ship updates regularly and work closely with our users.

We demonstrate a technically superior product.

 

We also use the same nomenclature (e.g., fill, hug).

How do you overcome developers’ resistance to leaving IDEs and Git?

First, a reframe: developers are not the primary decision-makers in Applikai's adoption path.

A founder, PM, or agency lead chooses Applikai.

The developer's job then shifts, and that shift is actually an upgrade.

 

In a traditional mobile stack, developers spend a disproportionate amount of time on work that has nothing to do with engineering: translating Figma files into code pixel by pixel, debugging layout inconsistencies across devices, wiring up analytics SDKs, managing App Store submissions, resolving merge conflicts from design changes.

These are not interesting problems. They are friction.

 

The developers who will resist Applikai are the ones whose identity is tied to the craft of writing Swift or Kotlin by hand.

That's a real group, and they are not our early adopters.

Our early adopters, who are developers, know the pain of shipping apps on mobile.

How do you mitigate vendor lock-in concerns for agencies and enterprises?

Lock-in fear comes from two distinct places:

The first is capability lock-in: "what if Applikai cannot build what my client needs in 6 months?"

Our answer is transparency: we publish our component roadmap publicly, we commit to specific release milestones with design partners, and enterprise contracts include a clause that allows custom component development at a fixed rate.

The second is data and portability lock-in: "what if we want to leave?"

We are evaluating open-sourcing the engine runtime as a concrete commitment to our customers.

If we disappear, your app keeps running and your data is readable without us.

 

The best lock-in protection is a product that keeps getting better faster than the cost of switching.

That's what we're building, and it's what our design partner relationships are designed to stress-test right now.

How do you overcome the “yet another standard” perception in a crowded AI app-builder market?

Most competitors output code and cannot deliver real-time device testing, soft updates, built-in telemetry & A/B, or Figma-style multi-agent collaboration.

Our unique app-engine architecture creates a permanent moat; Lovable, Figma, and others cannot integrate these features without a complete overhaul.

Design partners (including Rocapine and Quiet) and proven demand (Lovable’s scale) validate real production need from day one.

How do you ease users into Applikai without a steep learning curve?

We closely mirror existing interfaces throughout the product: designers have a familiar Figma-like interface with the same nomenclature.

Our agents interact with Applikai exactly like a user would. They select wireframes, open menus, change properties, etc. with a mouse cursor.

You can ask an agent to show you how to perform a task.

In addition, all tasks can be triggered with plain English to be processed by agents.

There is no need to know how to add a condition to a navigation edge: just tell the agents "When user is logged in, go to screen X, otherwise Y" while selecting the starting screen.

AI models

What happens to your moat if frontier AI models make code generation trivial and error-free?

We are designing for it: better models accelerate us more than they threaten us.

Every capability improvement in code generation translates directly into better agents inside Applikai.

We are the orchestration layer and runtime, not a code generator.

The smarter the models get, the more valuable a structured, auditable, conflict-free environment becomes to run them in.

 

Code generation getting better does not make Applikai obsolete:

  • It doesn't give you live on-device sync.
  • It doesn't give you over-the-air updates.
  • It doesn't give you native 120 FPS rendering.
  • It doesn't eliminate merge conflicts when five agents are working in parallel.
  • It doesn't give you built-in A/B testing and telemetry without third-party integrations.

 

The analogy is Unreal Engine vs. AI-generated 3D assets.

Better AI tools generate better assets faster. Studios still need a game engine to run them.

Nobody argues that Nano Banana makes Unreal Engine obsolete.

Applikai is the engine. Models are the asset pipeline.

What happens to your moat if frontier AI can directly generate binaries?

Applikai minimizes the use of code as much as possible.

Applikai encodes the functional spec as structured English (e.g., ‘when [event] happens, do [action]’).

Applikai is designed with such models in mind.

Mathieu

Julien

AI

Paul

FAQ

Applikai

Back to website

Product

What is Applikai?

Applikai is a web-based collaborative workspace where humans and ambient AI agents build and evolve mobile applications together. Users work on an infinite canvas and collaborate in real time on a shared application model (intent), not on generated code.

Who is Applikai for?

Teams building mobile and internal/operational apps - product, design, engineering, ops, data, marketing, sales working together under high iteration pressure. Agencies and studios building apps for clients are one of our primary ICPs within this broader segment.

What do users do in Applikai?

Users design screens, define flows, connect data and refine app behavior on a shared canvas. AI agents show up as collaborators (engineering, design, ops, data, marketing, sales, ..) and can proactively suggest or apply changes. The same workspace is used both to create a first version and to iterate on an existing product, including viewing key usage/churn signals and turning them into product changes.

What does "ambient AI" mean?

Ambient AI means agents stay present, maintain context and understand the current application state over time. They can act proactively when needed, but only through constrained, validated operations on the shared model - not free-form code edits.

Why does intent vs syntax matter?

Code is a great execution format, but a poor collaboration format at scale. A shared intent model enables semantic diffs ("flow updated", "screen added"), safer global changes and real-time collaboration that remains coherent as the app evolves.

What problem are you solving?

Most AI dev tools generate or edit source code on demand. That works for prototypes, but becomes brittle once an app needs to evolve across iterations with multiple humans and AI touching the same codebase. Coordination becomes manual work, and the cost of change grows fast.

What is the "shared application model" (and where does the DSL fit)?

Under the hood, every app is represented as a structured model of intent: screens, components, flows, state, rules, data bindings and constraints. We sometimes call this our DSL (domain-specific language), but users do not write it. The model is the system of record that humans and AI agents edit through validated operations.

Why mobile first?

Why start with mobile?

Mobile concentrates the hardest constraints: state, offline behavior, performance, permissions, store submission and compliance. Code-first and on-demand generation approaches break quickly here after the first prototype. If continuous collaboration works for mobile, it generalizes to other app categories.

Are you building native iOS/Android?

Our goal is to produce production-grade mobile apps with access to real mobile capabilities. The exact runtime approach is a product decision (native, cross-platform, or hybrid), but the core differentiation is the shared application model and collaboration layer that drives the app.

What is supported (and not yet) + technical foundations

What can you build in the first usable version?

A focused set of mobile app primitives: common screens, navigation patterns, forms, lists, authentication flows, camera, GPS, background tasks and basic data interactions - optimized for internal/operational apps and long-lived iteration cycles.

What is not fully supported yet?

We will progressively expand coverage. Early on:

  • Not every native capability is available on day one.
  • Not every UI component exists day one - we focus on the most common building blocks first.
  • Enterprise governance features (SSO, audit logs, SOC2) are roadmap items.
  • Store submission automation and compliance helpers are delivered incrementally.

How does real-time collaboration work?

Collaboration is built on model operations (not file merges). Users and agents edit the same structured model, with real-time sync, permissions, history and conflict resolution designed for collaborative editing.

How do AI agents interact with the app?

Agents propose and execute validated operations on the model (for example: "add screen", "update flow", "change rule", "add tracking event"). A validation layer enforces invariants, and the system can support review/rollback workflows.
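
To make the idea concrete, here is a hedged sketch of such a validation layer (the operation shape and invariants are hypothetical, not Applikai's API):

```python
# Hypothetical sketch of a validation layer for model operations.
# An operation is applied only if every invariant holds; otherwise it
# is rejected and the proposing agent must revise it.

def validate_add_screen(model, op):
    """Return a list of invariant violations; empty means the op is safe."""
    errors = []
    if not op.get("screen_id"):
        errors.append("screen id must be non-empty")
    elif op["screen_id"] in model["screens"]:
        errors.append("duplicate screen id")
    return errors

def apply_op(model, op):
    """Apply a validated operation; True on success, False on rejection."""
    if op.get("type") == "add_screen":
        if validate_add_screen(model, op):
            return False
        model["screens"][op["screen_id"]] = {"components": []}
        return True
    return False                     # unknown operation types are rejected

model = {"screens": {}}
ok = apply_op(model, {"type": "add_screen", "screen_id": "home"})
dup = apply_op(model, {"type": "add_screen", "screen_id": "home"})  # rejected
```

Because every change is a discrete, inspectable operation, review and rollback workflows can be layered on top.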

Do you generate code?

Code and runtime artifacts can be generated or adapted from the model, but the model remains the source of truth. This is what keeps iteration safe and collaboration scalable.

Performance - what do you target?

We design for mobile constraints. We target smooth UI on supported devices and fast startup, with performance depending on the app's complexity, assets and device capabilities. We optimize the engine, caching and updates as the product matures.

Can you push updates without app store reviews?

We follow platform rules. Some changes can be shipped as content/model updates; other changes require a store release. We design the system to keep compliance clear and avoid grey areas.

How do you handle user data?

We minimize sensitive data in the core platform and design for least-privilege access. Enterprise-grade security (SSO, audit logs, SOC2) is on the roadmap as we move from design partners to broader deployments.

Adoption, business model and competition

How do teams adopt Applikai?

The natural motion is bottom-up: one person starts a project, then invites teammates and stakeholders to collaborate on the same canvas. Agencies can also adopt by building client projects and scaling usage across accounts.

How do you make money?

SaaS pricing with a base license (seats/teams) plus usage-based AI (tokens/credits) for heavy agent usage. We expect expansion as teams invite more collaborators and adopt more agent workflows.

Who are your competitors and what's different?

Vibe coding platforms and AI IDEs optimize for generating the first version quickly, but keep code as the source of truth. That makes iteration and collaboration fragile beyond the prototype. Applikai is built around a shared application model, real-time collaboration and ambient agents operating safely on intent, not on code diffs.


Market

What market is Applikai addressing?

Applikai targets the emerging market of AI-native software development tools: products that help teams build and evolve applications with AI, not just generate a first version. We start with mobile, where state, offline behavior, deployment constraints, and multi-stakeholder collaboration make iteration hardest and the need most acute.

Why now?

  • AI coding tools have reached real scale (Lovable and Cursor reporting very large ARR run-rates).
  • Teams are shifting from one-off code generation to continuous, multi-actor development (humans + AI agents).
  • All the current tools still treat code as the source of truth, which works for prototypes but becomes brittle and unscalable as apps evolve.

 

Selected public signals:

  • Lovable: publicly communicated $100M ARR milestone in 8 months.
  • Cursor: publicly reported very large ARR run-rate and major funding rounds.
  • Application development & low-code markets continue to grow (24% CAGR 2024-2030, Grand View Research).

TAM

Spend-based approach

TAM = global spend on tools used to build and evolve applications (AI-native development, app-building and iteration tooling). Mobile is a wedge, not a separate TAM line item.

 

TAM (2025/26): ~$40B-$70B in annual spend.

TAM grows to ~$150B-$270B over the next 5-8 years as low-code and AI-native development expand.

Anchor: Low-code development platforms alone are estimated at ~$37.4B in 2025 (Fortune Business Insights), before adding AI coding tools and adjacent app development spend. (https://www.fortunebusinessinsights.com/low-code-development-platform-market-102972)

 

Key TAM assumptions

  • Paid seat ARPU (license): $25-$75 / seat / month (range across dev tools & collaborative creative tools).
  • AI usage (tokens/credits): $10-$50 / active seat / month (usage varies widely; margins depend on model choice + caching + guardrails).
  • Buying units: solo, teams & agencies first; enterprise expands after (focus on collaboration-heavy, long-lived use cases).

SAM

SAM (initial wedge: mobile-first, collaboration-heavy teams and builders): ~$6B-$15B

This reflects capturing ~15–25% of the TAM initially (mobile + long-lived apps where iteration and coordination costs are highest), with expansion to web/backend/slides/animation workflows later.

SAM focuses on the subset of the market that matches Applikai’s initial product constraints and wedge: mobile-first teams, solo builders, and founders that iterate frequently and benefit from real-time collaboration + persistent AI agents.

SOM

The SOM is built on two distinct pricing models that map to two distinct segments:

  • individual/team license,
  • agency per-project (plus an enterprise layer from Year 2).

Each has its own ARPU logic and growth driver.

Revenue model by segment

| Segment | Model | Pricing | License ARPU / yr | + Token ($30/u/mo) | Total ARPU / yr |
| --- | --- | --- | --- | --- | --- |
| Solo / Founder / SME | Per seat (license) | $49/seat/mo indiv · $39/seat/mo teams | ~$540/seat | +$360/seat | ~$900/seat |
| Agency / Studio | Base license + per active project | $299/mo base (5 proj.) + $89/proj. · avg. 15 proj. → ~$1,185/mo | ~$14,200/agency | +$10,800 (30 users × $30 × 12) | ~$25,000/agency |
| Enterprise | Per seat, annual contract | $200/seat/mo · min 20 seats · SOC2/HIPAA | ~$72K (30 seats) | +$10,800 (30 users × $30 × 12) | ~$83K ACV |

SOM projection

|  | Year 1 | Year 2 | Year 3 | Year 4 |
| --- | --- | --- | --- | --- |
| ARR range | $5–12M | $48–94M | $213–384M | $657M–1.15B |
| Figma (ref) | ~$10M | ~$50M | ~$150M | ~$400M |
| Lovable (ref) | ~$150M | ~$400M | ~$800M | ~$1.5B+ |

Key assumptions:

  • agency channel live by Month 9: avg. 15 active projects × 2 active users per project = 30 token-billable users per agency
  • PM segment acquired via bottom-up team adoption (one PM → product team)
  • enterprise from Month 18.

 

Why PMs change the model:

they are team-budget buyers, not personal-card buyers.

A single PM seat converts to a team license within 3 to 6 months in 50% of cases, making PMs the highest-leverage bottom-up acquisition vector after agency introductions.

Applikai is designed to onboard PMs without requiring them to learn a new tool, new habits, or a new workflow.

Why is mobile the right bet versus web for Applikai?

Three structural reasons:

  • The gap is larger: Lovable, Bolt, and base44 are all web-first. No platform has achieved the mobile equivalent of Lovable's scale. Mobile development is 3-5x more complex than web and proportionally more painful to automate, which is precisely why the opportunity is larger.
  • The unit economics are better: Mobile apps command higher prices, longer retention cycles, and more predictable compute usage than web apps. Better margins for the platform and higher willingness-to-pay from users.
  • The moat is deeper: App Store distribution, native performance (120 FPS, sub-100ms cold start), and built-in telemetry/A/B testing are defensible capabilities tied to Applikai's engine architecture. Web-first competitors cannot replicate them without an existential rewrite.

Applikai: Lovable for mobile

How does Applikai differ from code-generation tools like Lovable, Bolt, or Cursor?

How does making Applikai an app engine help you build mobile applications?

How do you reduce App Store rejection risk?

Our app engine ships with compliant behaviors by default - for example, App Tracking Transparency (ATT) and telemetry.

We auto-generate privacy manifests, nutrition labels, required policies, and permission explanations from actual data flows.

Agents flag risky patterns (aggressive notifications, missing attribution) during building.

Compliance wizards guide users through edge cases such as apps for kids or apps handling financial data.

How does having an app engine enable better apps for the end user?

Consistently high frame rates: We target up to 120 FPS on supported hardware and deliver smooth performance on all devices.

Fast cold starts: An app's typical compressed size (without assets) is a few hundred KB, with cold starts targeted under 100 ms.

Low resource footprint for lower battery consumption: C++ and Metal allow for highly optimized code.

Fluid, native-feeling interactions: Gestures, haptics, keyboard handling, safe areas, dynamic type, VoiceOver/TalkBack, reduced motion, ... all owned by the engine. No stutter.

What can be soft-updated, and when is a full store push mandatory?

Applikai is in a grey area of the App Store guidelines.

Wireframes, navigation, etc. are fed as data to the engine, not executable or interpreted code.

We choose to enforce a policy stricter than required, to prevent any issue with Apple's review process.

We only allow text, images, colors, safe A/B variants of pre-approved elements, and changes that do not alter app behavior.

Other updates go through the classic app review process.

Competition

How easily can the competition copy Applikai?

In short, hard to replicate without major rewrites.

Code-based app builders cannot implement most of Applikai's features or do so as effectively.

Using Git makes agentic collaboration hard by default and impossible for non-technical people.

You cannot "just add AI" on top of an existing product with years of legacy.

For example, Figma could not integrate agentic features into Figma Design and had to ship a completely separate Figma Make.

Competitors will focus on their markets and existing customers, not start a full rewrite that will render all existing projects obsolete.

Why will Figma not compete with Applikai?

Core technical mismatch

  • Browser-based design canvas + .fig file format prevents native mobile runtime execution and packaging
  • No structured entity system or version control adapted for real-time multi-agent editing
  • Web-based code foundation eliminates instant mobile previews and over-the-air updates

 

Figma cannot "just add AI" into its product

  • No 1-to-1 export to mobile
  • Figma Make is a different product from Figma Design: the data model of Figma Design was not built for agentic workflows
  • Figma Make and AI features output prototypes, not complete mobile apps.
  • Entire product, plugins, templates, variables, and collaboration features are built around design handoff.
  • Existing customer files, team processes, and ecosystem (plugins, Dev Mode, FigJam integration) cannot be discarded

 

Strategic reality

  • Talent, culture, and internal processes are built around design tools, not working applications
  • ~$14.5B market cap and ~$1.37B revenue guidance for 2026 (post-IPO reset from 2025 peak) leave no room for a radical pivot to own the full app executable stack

 

Figma attempting to become Applikai would mean abandoning its core identity as the universal design tool, fragmenting its ecosystem of 10M+ designers, and diluting the value of every existing .fig file.

All to enter a market it has never operated in.

Why will Lovable not compete with Applikai?

Core technical mismatch

  • Git prevents real-time multi-user editing
  • Git blocks coordinated work across dozens of AI agents
  • Web-based code foundation eliminates instant mobile previews and over-the-air updates

 

Lovable's legacy inertia

  • 8 million+ users and paying customer projects lock the company into its current architecture
  • The mutable codebase plus every prompt template, diffing agent, and Git tool will stay in place
  • LLM orchestration for web codebases will not be replaced

 

Strategic reality

  • Talent, culture, and internal processes are built to ship web apps
  • Core engineering team has zero experience building production app engines for cross-platform mobile infrastructure
  • $6.6B valuation and $200M ARR (Series B 2025) eliminate any room for a full pivot

 

Lovable copying Applikai means scrapping their architecture, their GTM, and their valuation while rebuilding from zero in a domain they have no experience in.

Technical execution

What is required to have an app engine?

Layout: Vertical, Horizontal, Grid, ...

Components: Button, Email Input, ...

Logic: what to do when an element is clicked, ...

Navigation: the user flow

Animation: transition between screens, fade-ins, fade-outs, ...

Native & hardware features: notifications, camera, ...

Data Model: Users, Images, Messages

What is the technical stack used?

The core of the engine is in C++.

 

For the graphics:

  • WebGL in the browser,
  • Metal on iOS,
  • Vulkan on Android.

 

WebSockets for the real-time networking.

How are UI components implemented without the native SDKs or third-party libraries like Shadcn UI?

We implement a core set of primitives in-house, and expand coverage progressively.

We avoid implementing every component in isolation to prevent combinatorial complexity.

Instead of having each UI element as a separate widget, we use a unified component system built from layout, logic, display, and animation properties.

The logic of a component is a set of behaviors (clickable, scrollable, ...).

Every component carries all the properties needed to render any type of UI element.

Each component supports text content, background color, border thickness, ... even if not used.

Components are also composable: a list item will be built from other components.

This makes the component system flexible and powerful.

Additionally, using AI, we can quickly generate many components from a few dozen well-crafted examples.
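
A minimal sketch of this unified approach (property and behavior names are illustrative, not Applikai's actual component schema):

```python
# Illustrative sketch of a unified component system: every component
# carries the full property set, "kind" only selects defaults, behaviors
# are a set of capabilities, and components compose via children.

FULL_PROPERTY_SET = {
    "text": "", "background_color": "#FFFFFF", "border_thickness": 0,
    "width": "hug", "height": "hug",
}

def make_component(kind, behaviors=(), children=(), **overrides):
    comp = {"kind": kind,
            "behaviors": set(behaviors),   # e.g. {"clickable", "scrollable"}
            "children": list(children),    # components are composable
            **FULL_PROPERTY_SET}
    comp.update(overrides)
    return comp

# A list item built from other components:
icon = make_component("icon")
label = make_component("label", text="Settings")
item = make_component("list_item", behaviors=["clickable"], children=[icon, label])
```

Because every component shares one property set, tools like animation, theming, and AI-driven generation only ever have to understand a single shape.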

How will Applikai support additional or custom components?

We chose not to support every possible case to avoid making Applikai overly complex.

Most apps use the same few components (Shadcn UI, for example, has ~60 standard components).

We will roll out components incrementally. There is no need to launch with 300 components.

We will roll out new components based on telemetry, user requests, and agent analysis.

We plan to have multiple agents proactively coding and suggesting new components.

For agencies, or big companies, an enterprise tier can include the development of custom components.

Our ability to roll out components incrementally demonstrates that we are a dedicated team that listens to the community.

What is the current state of the product?

Applikai is a deep-tech product, not something that can be assembled from off-the-shelf components or generated with a prompt.

Our CTO Julien has been building the core engine solo: the real-time networking layer, the component system, and the agentic harness.

We are raising to move from validated architecture to shipped product: hiring the experienced engineers who will build on top of what Julien has laid down, and accelerating the roadmap with our first design partners.

What components are already implemented?

Vertical/Horizontal layout

Checkbox

Button

Double Slider

Icon

Image

Label

Loader

Progress Bar

Slider

Subtitle

Text

Text Input

Title

Wireframe (root component)

What layout options are already implemented?

Horizontal/vertical alignment (left/center/right + top/center/bottom).

Width and height in pixels, percent, hug, or fill.

Is making an animation system difficult?

In our case, it is straightforward.

An animation is simply an interpolation of a component's properties between two points.

In our component system, all components have all properties (position, size, color, ...).

An animation consists of a component ID, a property ID, a duration, and an easing function.
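
A sketch of that model (field names are illustrative; the real engine is C++, Python is used here for brevity):

```python
# Sketch of the animation model described above: an animation is an
# interpolation of one component property between two values, driven
# by a duration and an easing function.

def ease_out_quad(t):
    return 1 - (1 - t) ** 2

def sample(animation, elapsed):
    """Value of the animated property at `elapsed` seconds."""
    t = min(max(elapsed / animation["duration"], 0.0), 1.0)
    k = animation["easing"](t)
    return animation["from"] + (animation["to"] - animation["from"]) * k

fade_in = {"component_id": "screen_home", "property_id": "opacity",
           "from": 0.0, "to": 1.0, "duration": 0.3, "easing": ease_out_quad}
```

Since all components expose all properties, the same four fields can animate position, size, color, or opacity without special cases.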

Why not use Lottie?

We can use the best Lottie player in Applikai without issue, and Lottie files can easily be supported.

However, Lottie does not handle basic use cases like screen transitions, which our own animation system covers.

Lottie will be integrated after Applikai’s initial release.

How do you access native/hardware features like the camera?

Applikai has a bridge to hardware features.

On iOS, dedicated Swift code handles the camera and can be called from the app. Same for other features.

Most apps use the same limited set of native/hardware features (camera, gallery, permissions, contacts).

We will roll out new native and hardware features over time based on telemetry, user requests, and agent analysis.

We plan to have multiple agents proactively coding and suggesting new native/hardware features.

Most, if not all, low/no-code app builders struggle with the complexity required to replace code. How do you tackle this issue?

We chose not to support every possible case; otherwise, Applikai would become a Rube Goldberg machine.

For almost all apps, the logic and operations performed are confined to a simple scope.

Mobile apps also operate on strict conventions that users expect. For example, the back arrow in the top-left corner.

We also divide the complexity of an app into separate, local problems.

For instance, navigation (with screen transitions) is a completely separate system from the wireframes.

We use our infinite canvas to showcase information, like a navigation graph.

We believe that dedicated tooling can replace code in almost all cases.

Before AI, the problem was the clunky interface and the tedious hours spent configuring everything that could otherwise be done in 2 lines of code.

With AI, all of it disappears.

The AI translates plain English into whatever is needed.

How do you ensure functional specifications in plain English are precise enough?

Plain English is only an input to the agents.

Applikai's agents transform the plain English input into structured data.

Applikai uses semantic components (button, image, etc.) and structured English (e.g., ‘when [event] happens, do [action]’) for logic.

Applikai uses a navigation graph and other visual tooling.

The user speaks in English and receives structured, understandable, and auditable data.
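
A toy sketch of how a structured-English rule can become auditable data (the grammar shown is illustrative, not Applikai's actual rule language):

```python
import re

# Hedged sketch: turning the structured-English rule form
# "when [event] happens, do [action]" into structured, auditable data.

RULE = re.compile(r"^when (?P<event>.+?) happens, do (?P<action>.+)$")

def parse_rule(text):
    m = RULE.match(text.strip().lower())
    if not m:
        raise ValueError(f"not a structured rule: {text!r}")
    return {"event": m.group("event"), "action": m.group("action")}

rule = parse_rule("When login_button_tap happens, do navigate_to(home)")
```

The resulting record can be validated, diffed, and displayed back to the user, which is what makes plain-English input safe as an interface.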

How are built-in A/B testing and telemetry possible?

Applikai sees every tap, every interaction.

Applikai also knows all your screens, your navigation graphs, and the A/B variants.

The app engine handles the ATT (App Tracking Transparency).

What prior deep-tech projects demonstrate the team’s ability to build Applikai?

Julien and Antoine were co-workers at Tempow, where they shipped high-end, highly performant Android kernel Bluetooth drivers to 50M+ mobile devices.

Antoine implemented the LC3 Bluetooth Audio Codec for Google's embedded systems.

Agentic AI

What are the agentic workflows of Applikai?

We use an Orchestrator/Worker/Judge model.

Orchestrators produce a task graph, workers execute the tasks in the correct order and judges validate.

We run Claude-powered agents (or equivalent LLMs) in dedicated VMs, each with full access to project data.
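
A minimal sketch of that loop (task names and the judge rule are placeholders; real workers would be LLM-backed agents running in VMs):

```python
from graphlib import TopologicalSorter

# Hedged sketch of the Orchestrator/Worker/Judge pattern: the orchestrator
# emits a task graph, workers run tasks in dependency order, and a judge
# validates each result before the pipeline continues.

task_graph = {                       # task -> set of prerequisite tasks
    "design_screen": set(),
    "wire_navigation": {"design_screen"},
    "add_tracking": {"wire_navigation"},
}

def worker(task):
    return f"done:{task}"            # stand-in for a real agent execution

def judge(result):
    return result.startswith("done:")    # stand-in validation rule

def run(graph):
    log = []
    for task in TopologicalSorter(graph).static_order():
        result = worker(task)
        if not judge(result):
            raise RuntimeError(f"judge rejected {task}")
        log.append(task)
    return log
```

The topological sort guarantees workers never start a task before its prerequisites have been completed and validated.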

How do you solve the multiple agent coordination issue?

With Git, collaboration is very hard.

This is also true for humans without any agents.

Git is asynchronous: you get large edits that require a merge with a conflict-resolution step.

We use a real-time model. No branches, no asynchronous editing.

Users and agents edit an app by submitting small deltas: adding a button, changing a color, ...

A delta has a defined target and action.

This makes the agent-to-agent, agent-to-user, and user-to-user collaboration trivial compared to Git.
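
A sketch of delta-based editing (field names are hypothetical, not Applikai's wire format):

```python
# Sketch of delta-based real-time editing: users and agents submit small,
# targeted deltas with a defined target and action, instead of merging
# file diffs.

def apply_delta(app, delta):
    target = app["components"][delta["target"]]
    if delta["action"] == "set_property":
        target[delta["property"]] = delta["value"]
    elif delta["action"] == "add_child":
        target["children"].append(delta["value"])
    else:
        raise ValueError(f"unknown action: {delta['action']}")

app = {"components": {"header": {"color": "white", "children": []}}}

# Two collaborators (say, a user and an agent) editing concurrently:
# their deltas touch disjoint properties, so both apply cleanly with
# no branch and no merge step.
apply_delta(app, {"target": "header", "action": "set_property",
                  "property": "color", "value": "blue"})
apply_delta(app, {"target": "header", "action": "add_child", "value": "logo"})
```

Since each delta names exactly what it changes, conflicts shrink from file-level merges to property-level decisions.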

Adoption

Why would agencies adopt Applikai?

The real risk is not generic switching cost, it's workflow inertia.

Agencies have a Figma → handoff → dev pipeline that their clients have accepted, their developers know, and their project managers can estimate.

Disrupting that is not a tooling decision. It is a strategic one.

 

We do not win by asking agencies to replace their existing workflow.

We win by inserting into it at the point of highest pain.

The specific moment agencies hate most is the gap between design and the working app: where pixel-perfect Figma files become imperfect React Native implementations, where "it looks different on my phone," where A/B testing requires wiring up a third-party SDK.

That gap costs time, budget, and client trust on every single project.

 

Applikai eliminates that gap structurally, not just incrementally.

The design is the app. There is no handoff.

 

So the adoption conversation with an agency isn't "abandon your stack."

It is "what if your next greenfield project shipped 40% faster with zero design-to-dev translation loss?"

That's a one-project trial, not an existential commitment. Agencies run parallel stacks all the time.

 

Rocapine and Quiet are design partners precisely because they're high-volume studios.

They ship many apps, they feel this pain on every project, and they have the appetite to try a new tool without betting their entire business on it.

How do you overcome designers’ resistance to leaving Figma?

We are following the same strategy Figma used to displace Sketch.

We offer features that deliver a 10× improvement in the overall experience, including but not limited to:

  • Fully functional wireframes in the desktop editor
  • Instant sync between the desktop editor and a mobile device
  • Pixel-perfect apps by default
  • Multiple simultaneous AI agentic workflows

We ship updates regularly and work closely with our users.

We demonstrate a technically superior product.

 

We also use the same nomenclature (e.g., fill, hug).

How do you overcome developers’ resistance to leaving IDEs and Git?

First, a reframe: developers are not the primary decision-makers in Applikai's adoption path.

A founder, PM, or agency lead chooses Applikai.

The developer's job then shifts, and that shift is actually an upgrade.

 

In a traditional mobile stack, developers spend a disproportionate amount of time on work that has nothing to do with engineering: translating Figma files into code pixel by pixel, debugging layout inconsistencies across devices, wiring up analytics SDKs, managing App Store submissions, resolving merge conflicts from design changes.

These are not interesting problems. They are friction.

 

The developers who will resist Applikai are the ones whose identity is tied to the craft of writing Swift or Kotlin by hand.

That's a real group, and they are not our early adopters.

Our early adopters, who are developers, know the pain of shipping apps on mobile.

How do you mitigate vendor lock-in concerns for agencies and enterprises?

Lock-in fear comes from two distinct places:

The first is capability lock-in: "what if Applikai cannot build what my client needs in 6 months?"

Our answer is transparency: we publish our component roadmap publicly, we commit to specific release milestones with design partners, and enterprise contracts include a clause that allows custom component development at a fixed rate.

The second is data and portability lock-in: "what if we want to leave?"

We are evaluating open-sourcing the engine runtime as a concrete commitment to our customers.

If we disappear, your app keeps running and your data is readable without us.

 

The best lock-in protection is a product that keeps getting better faster than the cost of switching.

That's what we're building, and it's what our design partner relationships are designed to stress-test right now.

How do you overcome the “yet another standard” perception in a crowded AI app-builder market?

Most competitors output code and cannot deliver real-time device testing, soft updates, built-in telemetry & A/B, or Figma-style multi-agent collaboration.

Our unique app-engine architecture creates a durable moat; Lovable, Figma, and others cannot integrate these features without a complete overhaul.

Design partners (including Rocapine and Quiet) and proven demand (Lovable’s scale) validate real production need from day one.

How do you ease users into Applikai without a steep learning curve?

We closely mirror existing interfaces throughout the product: designers have a familiar Figma-like interface with the same nomenclature.

Our agents interact with Applikai exactly like a user would. They select wireframes, open menus, change properties, etc. with a mouse cursor.

You can ask an agent to show you how to perform a task.

In addition, all tasks can be triggered with plain English to be processed by agents.

There is no need to know how to add a condition to a navigation edge: just tell the agents "When user is logged in, go to screen X, otherwise Y" while selecting the starting screen.

AI models

What happens to your moat if frontier AI models make code generation trivial and error-free?

We are designing for it: better models accelerate us more than they threaten us.

Every capability improvement in code generation translates directly into better agents inside Applikai.

We are the orchestration layer and runtime, not a code generator.

The smarter the models get, the more valuable a structured, auditable, conflict-free environment becomes to run them in.

 

Code generation getting better does not make Applikai obsolete:

  • It doesn't give you live on-device sync.
  • It doesn't give you over-the-air updates.
  • It doesn't give you native 120 FPS rendering.
  • It doesn't eliminate merge conflicts when five agents are working in parallel.
  • It doesn't give you built-in A/B testing and telemetry without third-party integrations.

 

The analogy is Unreal Engine vs. AI-generated 3D assets.

Better AI tools generate better assets faster. Studios still need a game engine to run them.

Nobody argues that Nano Banana makes Unreal Engine obsolete.

Applikai is the engine. Models are the asset pipeline.

What happens to your moat if frontier AI can directly generate binaries?

Applikai minimizes the use of code as much as possible.

Applikai encodes the functional spec as structured English (e.g., ‘when [event] happens, do [action]’).

Applikai is designed with such models in mind.

Mathieu

Julien

AI

Paul

FAQ

Applikai

Back to website

Product

What is Applikai?

Applikai is a web-based collaborative workspace where humans and ambient AI agents build and evolve mobile applications together. Users work on an infinite canvas and collaborate in real time on a shared application model (intent), not on generated code.

Who is Applikai for?

Teams building mobile and internal/operational apps - product, design, engineering, ops, data, marketing, sales working together under high iteration pressure. Agencies and studios building apps for clients are one of our primary ICPs within this broader segment.

What do users do in Applikai?

Users design screens, define flows, connect data and refine app behavior on a shared canvas. AI agents show up as collaborators (engineering, design, ops, data, marketing, sales, ..) and can proactively suggest or apply changes. The same workspace is used both to create a first version and to iterate on an existing product, including viewing key usage/churn signals and turning them into product changes.

What is the "shared application model" (and where does the DSL fit)?

Under the hood, every app is represented as a structured model of intent: screens, components, flows, state, rules, data bindings and constraints. We sometimes call this our DSL (domain-specific language), but users do not write it. The model is the system of record that humans and AI agents edit through validated operations.

What problem are you solving?

Most AI dev tools generate or edit source code on demand. That works for prototypes, but becomes brittle once an app needs to evolve across iterations with multiple humans and AI touching the same codebase. Coordination becomes manual work, and the cost of change grows fast.

Why does intent vs syntax matter?

Code is a great execution format, but a poor collaboration format at scale. A shared intent model enables semantic diffs ("flow updated", "screen added"), safer global changes and real-time collaboration that remains coherent as the app evolves.

What does "ambient AI" mean?

Ambient AI means agents stay present, maintain context and understand the current application state over time. They can act proactively when needed, but only through constrained, validated operations on the shared model - not free-form code edits.

Why mobile first?

Why start with mobile?

Mobile concentrates the hardest constraints: state, offline behavior, performance, permissions, store submission and compliance. Code-first and on-demand generation approaches break quickly here after the first prototype. If continuous collaboration works for mobile, it generalizes to other app categories.

Are you building native iOS/Android?

Our goal is to produce production-grade mobile apps with access to real mobile capabilities. The exact runtime approach is a product decision (native, cross-platform, or hybrid), but the core differentiation is the shared application model and collaboration layer that drives the app.

What is supported (and not yet) + technical foundations

What can you build in the first usable version?

A focused set of mobile app primitives: common screens, navigation patterns, forms, lists, authentication flows, camera, GPS, background tasks and basic data interactions - optimized for internal/operational apps and long-lived iteration cycles.

What is not fully supported yet?

We will progressively expand coverage. Early on:

  • Not every native capability is available on day one.
  • Not every UI component exists day one - we focus on the most common building blocks first.
  • Enterprise governance features (SSO, audit logs, SOC2) are roadmap items.
  • Store submission automation and compliance helpers are delivered incrementally.

How does real-time collaboration work?

Collaboration is built on model operations (not file merges). Users and agents edit the same structured model, with real-time sync, permissions, history and conflict resolution designed for collaborative editing.

How do AI agents interact with the app?

Agents propose and execute validated operations on the model (for example: "add screen", "update flow", "change rule", "add tracking event"). A validation layer enforces invariants, and the system can support review/rollback workflows.

Do you generate code?

Code and runtime artifacts can be generated or adapted from the model, but the model remains the source of truth. This is what keeps iteration safe and collaboration scalable.

Performance - what do you target?

We design for mobile constraints. We target smooth UI on supported devices and fast startup, with performance depending on the app's complexity, assets and device capabilities. We optimize the engine, caching and updates as the product matures.

Can you push updates without app store reviews?

We follow platform rules. Some changes can be shipped as content/model updates; other changes require a store release. We design the system to keep compliance clear and avoid grey areas.

How do you handle user data?

We minimize sensitive data in the core platform and design for least-privilege access. Enterprise-grade security (SSO, audit logs, SOC2) is on the roadmap as we move from design partners to broader deployments.

Adoption, business model and competition

How do teams adopt Applikai?

The natural motion is bottom-up: one person starts a project, then invites teammates and stakeholders to collaborate on the same canvas. Agencies can also adopt by building client projects and scaling usage across accounts.

How do you make money?

SaaS pricing with a base license (seats/teams) plus usage-based AI (tokens/credits) for heavy agent usage. We expect expansion as teams invite more collaborators and adopt more agent workflows.

Who are your competitors and what's different?

Vibe coding platforms and AI IDEs optimize for generating the first version quickly, but keep code as the source of truth. That makes iteration and collaboration fragile beyond the prototype. Applikai is built around a shared application model, real-time collaboration and ambient agents operating safely on intent, not on code diffs.


Market

What market is Applikai addressing?

Applikai targets the emerging market of AI-native software development tools: products that help teams build and evolve applications with AI, not just generate a first version. We start with mobile, where state, offline behavior, deployment constraints, and multi-stakeholder collaboration make iteration hardest and the need most acute.

Why now?

  • AI coding tools have reached real scale (Lovable and Cursor reporting very large ARR run-rates).
  • Teams are shifting from one-off code generation to continuous, multi-actor development (humans + AI agents).
  • All the current tools still treat code as the source of truth, which works for prototypes but becomes brittle and unscalable as apps evolve.

 

Selected public signals:

  • Lovable: publicly communicated $100M ARR milestone in 8 months.
  • Cursor: publicly reported very large ARR run-rate and major funding rounds.
  • Application development and low-code markets continue to grow (24% CAGR 2024-2030, Grand View Research).

TAM

Spend-based approach

TAM = global spend on tools used to build and evolve applications (AI-native development, app-building and iteration tooling). Mobile is a wedge, not a separate TAM line item.

 

TAM (2025/26): ~$40B-$70B in annual spend.

TAM grows to ~$150B-$270B over the next 5-8 years as low-code and AI-native development expand.

Anchor: Low-code development platforms alone are estimated at ~$37.4B in 2025 (Fortune Business Insights), before adding AI coding tools and adjacent app development spend. (https://www.fortunebusinessinsights.com/low-code-development-platform-market-102972)

 

Key TAM assumptions

  • Paid seat ARPU (license): $25-$75 / seat / month (range across dev tools & collaborative creative tools).
  • AI usage (tokens/credits): $10-$50 / active seat / month (usage varies widely; margins depend on model choice + caching + guardrails).
  • Buying units: solo, teams & agencies first; enterprise expands after (focus on collaboration-heavy, long-lived use cases).

SAM

SAM (initial wedge: mobile-first, collaboration-heavy teams and builders): ~$6B-$15B

This reflects capturing ~15–25% of the TAM initially (mobile + long-lived apps where iteration and coordination costs are highest), with expansion to web/backend/slides/animation workflows later.

SAM focuses on the subset of the market that matches Applikai’s initial product constraints and wedge: mobile-first teams, solo builders, and founders that iterate frequently and benefit from real-time collaboration + persistent AI agents.

SOM

The SOM is built on two distinct pricing models that map to two distinct segments:

  • individual/team licence,
  • agency per-project (plus an enterprise layer from Year 2).

Each has its own ARPU logic and growth driver.

Revenue model by segment

Segment              | Model                             | Pricing                                                         | ARPU             | + Token ($30/u/mo)             | Total ARPU / yr
Solo / Founder / SME | Per seat (licence)                | $49/seat/mo indiv, $39/seat/mo teams                            | ~$540/seat       | +$360/seat                     | ~$900/seat
Agency / Studio      | Base licence + per active project | $299/mo base (5 proj.) + $89/proj. · avg. 15 proj. → ~$1,185/mo | ~$14,200/agency  | +$10,800 (30 users × $30 × 12) | ~$25,000/agency
Enterprise           | Per seat, annual contract         | $200/seat/mo · min 20 seats · SOC2/HIPAA                        | ~$72K (30 seats) | +$10,800 (30 users × $30 × 12) | ~$83K ACV
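As a sanity check, the segment ARPUs above reduce to simple arithmetic. One caveat: the 60/40 individual/team seat mix below is our own assumption, chosen because it reproduces the ~$540 blended figure; the table itself does not state a mix.

```python
# Sanity-check of the ARPU arithmetic. The 60/40 individual/team mix
# is an assumption chosen to reproduce the ~$540 blended seat figure.
indiv_yr, team_yr = 49 * 12, 39 * 12           # $588, $468 per seat/yr
blended_seat = 0.6 * indiv_yr + 0.4 * team_yr  # = $540/seat licence
token_seat = 30 * 12                           # = $360/seat tokens/yr
agency_mo = 299 + (15 - 5) * 89                # = $1,189/mo (table rounds to ~$1,185)
agency_yr = agency_mo * 12                     # = $14,268 (~$14,200/agency)
agency_tokens = 30 * 30 * 12                   # 30 billable users -> $10,800
ent_yr = 200 * 30 * 12                         # 30 seats -> $72,000 licence
ent_acv = ent_yr + agency_tokens               # = $82,800 (~$83K ACV)
```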

SOM projection

 

              | Year 1  | Year 2   | Year 3     | Year 4
ARR range     | $5–12M  | $48–94M  | $213–384M  | $657M–1.15B
Figma (ref)   | ~$10M   | ~$50M    | ~$150M     | ~$400M
Lovable (ref) | ~$150M  | ~$400M   | ~$800M     | ~$1.5B+

Key assumptions:

  • agency channel live by Month 9: avg. 15 active projects × 2 active users per project = 30 token-billable users per agency
  • PM segment acquired via bottom-up team adoption (one PM → product team)
  • enterprise from Month 18.

 

Why PMs change the model: they are team-budget buyers, not personal-card buyers.

A single PM seat converts to a team licence within 3 to 6 months in 50% of cases, making them the highest-leverage bottom-up acquisition vector after agency introductions.

Applikai is designed to onboard PMs without requiring them to learn a new tool, new habits, or a new workflow.

Why is mobile the right bet versus web for Applikai?

Three structural reasons:

  • The gap is larger: Lovable, Bolt, and base44 are all web-first. No platform has achieved the mobile equivalent of Lovable's scale. Mobile development is 3-5x more complex than web and proportionally more painful to automate, which is precisely why the opportunity is larger.
  • The unit economics are better: Mobile apps command higher prices, longer retention cycles, and more predictable compute usage than web apps. Better margins for the platform and higher willingness-to-pay from users.
  • The moat is deeper: App Store distribution, native performance (120 FPS, sub-100ms cold start), and built-in telemetry/A/B testing are defensible capabilities tied to Applikai's engine architecture. Web-first competitors cannot replicate them without an existential rewrite.

Applikai: Lovable for mobile

How does Applikai differ from code-generation tools like Lovable, Bolt, or Cursor?

How does making Applikai an app engine help you build mobile applications?

How does having an app engine enable better apps for the end user?

Consistently high frame rates: We target up to 120 FPS on supported hardware and aim for smooth performance across devices.

Fast cold starts: An app's typical compressed size (without assets) is a few hundred KB, with cold starts targeted under 100 ms.

Low resource footprint for lower battery consumption: C++ and Metal allow for highly optimized code.

Fluid, native-feeling interactions: Gestures, haptics, keyboard handling, safe areas, dynamic type, VoiceOver/TalkBack, reduced motion, ... all owned by the engine. No stutter.

How do you reduce App Store rejection risk?

Our app engine has compliant behaviors by default. For example: App Tracking Transparency (ATT) and telemetry.

We auto-generate privacy manifests, nutrition labels, required policies, and permission explanations from actual data flows.

Agents flag risky patterns (aggressive notifications, missing attribution) during building.

Compliance wizards guide users through edge cases such as apps for kids or apps handling financial data.

What can be soft-updated, and when is a full store release mandatory?

Applikai is designed to remain compliant with App Store guidelines.

Wireframes, navigation, etc. are fed as data to the engine, not executable or interpreted code.

We enforce a policy stricter than Apple's review standard to prevent issues, and we monitor guideline updates to stay compliant.

We only allow text, images, colors, safe A/B variants of pre-approved elements, and changes that do not alter app behavior.

Other updates go through the classic app review process.
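The soft-update policy above can be sketched as a simple classifier. This is a hypothetical sketch: the change kinds and field names are our own, not Applikai's actual update schema.

```python
# Hypothetical sketch of the soft-update policy: only content-level
# changes ship over the air; anything that could alter app behavior
# is routed to a classic store release. Field names are our own.
SOFT_UPDATABLE = {"text", "image", "color", "ab_variant"}

def classify_update(change):
    """Return 'soft' for over-the-air delivery, 'store' otherwise."""
    if change["kind"] in SOFT_UPDATABLE and not change.get("alters_behavior", False):
        return "soft"
    return "store"

assert classify_update({"kind": "text", "value": "New headline"}) == "soft"
assert classify_update({"kind": "navigation"}) == "store"  # behavior change
assert classify_update({"kind": "color", "alters_behavior": True}) == "store"
```

Classifying conservatively (unknown kinds default to a store release) is what keeps the policy stricter than Apple's own standard.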

Competition

How easily can the competition copy Applikai?

In short, hard to replicate without major rewrites.

Code-based app builders cannot implement most of Applikai's features or do so as effectively.

Using Git makes agentic collaboration hard by default and impossible for non-technical people.

You cannot "just add AI" on top of an existing product with years of legacy.

For example, Figma could not integrate agentic features into Figma Design and had to ship a completely separate Figma Make.

Competitors will focus on their markets and existing customers, not start a full rewrite that will render all existing projects obsolete.

Why won't Figma compete with Applikai?

Core technical mismatch

  • Browser-based design canvas + .fig file format prevents native mobile runtime execution and packaging
  • No structured entity system or version control adapted for real-time multi-agent editing
  • Web-based code foundation eliminates instant mobile previews and over-the-air updates

 

Figma cannot "just add AI" into its product

  • No 1-to-1 export from a design file to a working mobile app
  • Figma Make is a different product from Figma Design: the data model of Figma Design was not built for agentic workflows
  • Figma Make and AI features output prototypes, not complete mobile apps.
  • Entire product, plugins, templates, variables, and collaboration features are built around design handoff.
  • Existing customer files, team processes, and ecosystem (plugins, Dev Mode, FigJam integration) cannot be discarded

 

Strategic reality

  • Talent, culture, and internal processes are built around design tools, not working applications
  • ~$14.5B market cap and ~$1.37B revenue guidance for 2026 (post-IPO reset from 2025 peak) leave no room for a radical pivot to own the full app executable stack

 

Figma attempting to become Applikai would mean abandoning its core identity as the universal design tool, fragmenting its ecosystem of 10M+ designers, and diluting the value of every existing .fig file.

All to enter a market it has never operated in.

Why won't Lovable compete with Applikai?

Core technical mismatch

  • Git prevents real-time multi-user editing
  • Git blocks coordinated work across dozens of AI agents
  • Web-based code foundation eliminates instant mobile previews and over-the-air updates

 

Lovable's legacy inertia

  • 8 million+ users and paying customer projects lock the company into its current architecture
  • The mutable codebase plus every prompt template, diffing agent, and Git tool will stay in place
  • LLM orchestration for web codebases will not be replaced

 

Strategic reality

  • Talent, culture, and internal processes are built to ship web apps
  • Core engineering team has zero experience building production app engines for cross-platform mobile infrastructure
  • $6.6B valuation and $200M ARR (Series B 2025) eliminate any room for a full pivot

 

Lovable copying Applikai means scrapping their architecture, their GTM, and their valuation while rebuilding from zero in a domain they have no experience in.

Technical execution

What is required to have an app engine?

Layout: Vertical, Horizontal, Grid, ...

Components: Button, Email Input, ...

Logic: what to do when an element is clicked, ...

Navigation: the user flow

Animation: transition between screens, fade-ins, fade-outs, ...

Native & hardware features: notifications, camera, ...

Data Model: Users, Images, Messages

What is the technical stack used?

The core of the engine is in C++.

 

For the graphics:

  • WebGL in the browser,
  • Metal on iOS,
  • Vulkan on Android.

 

WebSockets for the real-time networking.

How are UI components implemented without the native SDKs or third-party libraries like Shadcn?

We implement a core set of primitives in-house, and expand coverage progressively.

We avoid implementing every component in isolation to prevent combinatorial complexity.

Instead of having each UI element as a separate widget, we use a unified component system built from layout, logic, display, and animation properties.

The logic of a component is a set of behaviors (clickable, scrollable, ...).

Every component carries all the properties needed to render any type of UI element.

Each component supports text content, background color, border thickness, ... even if not used.

Components are also composable: a list item will be built from other components.

This makes the component system flexible and powerful.

Additionally, using AI, we can quickly generate many components from a few dozen well-crafted examples.
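The unified component system above can be sketched as follows. This is an illustrative sketch: the property names, behavior set, and composition API are our own, not Applikai's actual schema.

```python
# Illustrative sketch of a unified component system: every component
# carries the full property set (layout, display, logic, animation),
# and behaviors are data, not per-widget subclasses.
DEFAULT_PROPS = {
    "text": "", "background": "#ffffff", "border_thickness": 0,
    "width": "hug", "height": "hug",
}

def make_component(kind, behaviors=(), children=(), **overrides):
    props = {**DEFAULT_PROPS, **overrides}  # unused props still exist
    return {"kind": kind, "props": props,
            "behaviors": set(behaviors), "children": list(children)}

# A list item is composed from other components, not a dedicated widget:
avatar = make_component("image", width="40px", height="40px")
title = make_component("label", text="Julien")
list_item = make_component("row", behaviors=["clickable"],
                           children=[avatar, title])
```

Because every component carries every property, there is one rendering and animation path instead of a combinatorial explosion of widget-specific code.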

How will Applikai support additional or custom components?

We chose not to support every possible case to avoid making Applikai overly complex.

Most apps use the same few components (Shadcn UI, for example, has ~60 standard components).

We will roll out components incrementally. There is no need to launch with 300 components.

We will roll out new components based on telemetry, user requests, and agent analysis.

We plan to have multiple agents proactively coding and suggesting new components.

For agencies, or big companies, an enterprise tier can include the development of custom components.

Incremental rollout also keeps us close to our users: each new component responds to a real, observed need rather than a speculative one.

What is the current state of the product?

Applikai is a deep-tech product: not something that can be assembled from off-the-shelf components or generated with a prompt.

Our CTO Julien has been building the core engine solo: the real-time networking layer, the component system, and the agentic harness.

We are raising to move from validated architecture to shipped product: hiring the experienced engineers who will build on top of what Julien has laid down, and accelerating the roadmap with our first design partners.

What components are already implemented?

Vertical/Horizontal layout

Checkbox

Button

Double Slider

Icon

Image

Label

Loader

Progress Bar

Slider

Subtitle

Text

Text Input

Title

Wireframe (root component)

What layout options are already implemented?

Horizontal/vertical alignment (left/center/right + top/center/bottom).

Width and height in pixels, percent, hug, or fill.
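The four sizing modes above can be sketched as a single resolution function. The exact resolution rule, in particular how "hug" and "fill" interact with the parent, is our own assumption for illustration.

```python
# Sketch of the sizing modes: a width can be absolute pixels, a
# percentage of the parent, "hug" (fit content), or "fill" (take the
# remaining space). The resolution rule here is our own guess.
def resolve_width(spec, parent_px, content_px, remaining_px):
    if spec.endswith("px"):
        return float(spec[:-2])
    if spec.endswith("%"):
        return parent_px * float(spec[:-1]) / 100.0
    if spec == "hug":
        return content_px
    if spec == "fill":
        return remaining_px
    raise ValueError(f"unknown sizing mode: {spec}")

assert resolve_width("120px", 390, 80, 200) == 120.0
assert resolve_width("50%", 390, 80, 200) == 195.0
assert resolve_width("hug", 390, 80, 200) == 80
assert resolve_width("fill", 390, 80, 200) == 200
```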

Is making an animation system difficult?

In our case, it is straightforward.

An animation is simply an interpolation of a component's properties between two points.

In our component system, all components have all properties (position, size, color, ...).

An animation consists of a component ID, a property ID, a duration, and an easing function.
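That definition can be sketched directly. The easing function and API names below are illustrative, not Applikai's actual engine interface.

```python
# An animation as described above: (component ID, property ID,
# duration, easing), interpolating a property between two values.
def ease_in_out(t):               # smoothstep easing
    return t * t * (3 - 2 * t)

def animate(start, end, duration_ms, easing, t_ms):
    """Value of the animated property at time t_ms."""
    t = min(max(t_ms / duration_ms, 0.0), 1.0)
    return start + (end - start) * easing(t)

# Fade a component's opacity from 0 to 1 over 300 ms:
anim = {"component": "screen.home", "property": "opacity",
        "duration_ms": 300, "easing": ease_in_out}
mid = animate(0.0, 1.0, anim["duration_ms"], anim["easing"], 150)
```

Because every component carries every property, this one interpolator covers position, size, color, and opacity alike; no per-widget animation code is needed.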

Why not use Lottie?

Lottie does not handle basic use cases like screen transitions, so it cannot replace our animation system.

Lottie files, however, are easy to support: we can embed a best-in-class Lottie player in Applikai without issue.

Lottie will be integrated after Applikai’s initial release.

How do you access native/hardware features like the camera?

Applikai has a bridge to hardware features.

On iOS, dedicated Swift code handles the camera and can be called from the app. Same for other features.

Most apps use the same limited set of native/hardware features (camera, gallery, permissions, contacts).

We will roll out new native and hardware features over time based on telemetry, user requests, and agent analysis.

We plan to have multiple agents proactively coding and suggesting new native/hardware features.
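The bridge described above can be sketched as a dispatch table. This is a hypothetical sketch: the registry, handler names, and calling convention are our own; in the real engine the handlers would be native Swift or Kotlin code.

```python
# Hypothetical sketch of the hardware bridge: the engine calls a
# registered platform handler (Swift on iOS, Kotlin on Android)
# through a single dispatch point. Names are our own.
BRIDGE = {}

def register(platform, feature, handler):
    BRIDGE[(platform, feature)] = handler

def call_native(platform, feature, **kwargs):
    handler = BRIDGE.get((platform, feature))
    if handler is None:
        raise NotImplementedError(f"{feature} not available on {platform}")
    return handler(**kwargs)

# On iOS this would be backed by dedicated Swift code; here we stub it:
register("ios", "camera", lambda mode="photo": f"ios-camera:{mode}")
result = call_native("ios", "camera", mode="photo")
```

A single dispatch point also makes capability coverage auditable: the set of registered (platform, feature) pairs is exactly what the app can do.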

Most, if not all, low/no-code app builders struggle with the complexity needed to replace code. How do you tackle this issue?

We chose not to support every possible case; otherwise, Applikai would become a Rube Goldberg machine.

For almost all apps, the logic and operations performed are confined to a simple scope.

Mobile apps also operate on strict conventions that users expect. For example, the back arrow in the top-left corner.

We also divide the complexity of an app into separate, local problems.

The navigation (with screen transition) is a completely separate system from the wireframes for instance.

We use our infinite canvas to showcase information, like a navigation graph.

We believe that dedicated tooling can replace code in almost all cases.

Before AI, the problem was the clunky interface and the tedious hours spent configuring everything that could otherwise be done in 2 lines of code.

With AI, all of it disappears.

The AI translates plain English into whatever is needed.

How do you ensure functional specifications in plain English are precise enough?

Plain English is only an input to the agents.

Applikai's agents transform the plain English input into structured data.

Applikai uses semantic components (button, image, etc.) and structured English (e.g., ‘when [event] happens, do [action]’) for logic.

Applikai uses a navigation graph and other visual tooling.

The user speaks in English and receives structured, understandable, and auditable data.
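The structured-English form above can be sketched as a tiny parser. The grammar and output field names are illustrative, not Applikai's actual rule format.

```python
# Sketch of turning the structured-English form
# "when [event] happens, do [action]" into auditable data.
import re

RULE = re.compile(r"when (?P<event>.+?) happens, do (?P<action>.+)")

def parse_rule(text):
    m = RULE.fullmatch(text.strip().rstrip("."))
    if not m:
        raise ValueError(f"not a structured rule: {text!r}")
    return {"event": m.group("event"), "action": m.group("action")}

rule = parse_rule("when a payment failure happens, do show the retry screen")
```

The constrained template is the point: free-form English goes to the agents, but what lands in the model is always machine-checkable data like `rule` above.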

How are built-in A/B testing and telemetry possible?

Applikai sees every tap, every interaction.

Applikai also knows all your screens, your navigation graph, and the A/B variants.

The app engine handles the ATT (App Tracking Transparency).
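Engine-level A/B assignment and telemetry can be sketched as follows. The bucketing scheme and event fields are our own illustration of why no third-party SDK is needed when the engine already renders every screen and sees every tap.

```python
# Sketch of engine-level A/B assignment and telemetry. Details
# (hashing scheme, event fields) are illustrative, not Applikai's.
import hashlib

def assign_variant(user_id, experiment, variants=("A", "B")):
    """Deterministic, roughly uniform bucketing from a stable hash."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).digest()
    return variants[digest[0] % len(variants)]

EVENTS = []

def on_tap(user_id, screen, element, experiment="checkout_cta"):
    EVENTS.append({
        "user": user_id, "screen": screen, "element": element,
        "variant": assign_variant(user_id, experiment),
    })

on_tap("u1", "checkout", "pay_button")
on_tap("u1", "checkout", "pay_button")  # same user, same variant
```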

What prior deep-tech projects demonstrate the team’s ability to build Applikai?

Julien and Antoine were co-workers at Tempow, where they shipped highly performant Bluetooth drivers in the Android kernel to 50M+ devices.

Antoine implemented the LC3 Bluetooth audio codec for Google, now embedded in billions of devices.

Julien has 20+ years coding in asm, C/C++, Java, Python, and HTML/CSS/TS.

Antoine has 30+ years coding in asm, C/C++, Kotlin, Flutter.

Agentic AI

What are the agentic workflows of Applikai?

We use an Orchestrator/Worker/Judge model.

Orchestrators produce a task graph, workers execute the tasks in the correct order and judges validate.

We run Claude-powered agents (or equivalent LLMs) in dedicated VMs, each with full access to project data.
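The Orchestrator/Worker/Judge loop can be sketched as follows. The tasks and the judge rule are illustrative stand-ins for what would be LLM calls in practice.

```python
# Sketch of the Orchestrator/Worker/Judge pattern: the orchestrator
# emits a task graph, workers run tasks in dependency order, and a
# judge validates each result before it is accepted.
from graphlib import TopologicalSorter

def orchestrate():
    # task -> set of tasks it depends on
    return {"add_screen": set(),
            "wire_navigation": {"add_screen"},
            "add_tracking": {"wire_navigation"}}

def worker(task):
    return f"done:{task}"             # stand-in for an LLM worker

def judge(task, result):
    return result == f"done:{task}"   # reject anything malformed

def run(graph):
    log = []
    for task in TopologicalSorter(graph).static_order():
        result = worker(task)
        if not judge(task, result):
            raise RuntimeError(f"judge rejected {task}")
        log.append(task)
    return log

order = run(orchestrate())
```

Separating the three roles means a flawed worker output never reaches the shared model: the judge gates every result, and the task graph guarantees the correct execution order.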

How do you solve the multiple agent coordination issue?

With Git, collaboration is very hard; this is true even for humans, without any agents.

Git is asynchronous: you get large edits that must be merged, with a conflict-resolution step.

We use a real-time model. No branch, no asynchronous editing.

Users and agents edit an app by submitting small deltas: adding a button, changing a color, ...

A delta has a defined target and action.

This makes the agent-to-agent, agent-to-user, and user-to-user collaboration trivial compared to Git.
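A minimal sketch of delta-based collaboration, assuming a server that applies deltas in arrival order; the delta shape and field names are our own illustration.

```python
# Sketch of delta-based collaboration: instead of branches and merges,
# every actor (human or agent) submits small, targeted deltas that a
# server applies in arrival order. Field names are illustrative.
def apply_delta(model, delta):
    target, action = delta["target"], delta["action"]
    node = model.setdefault(target, {})
    if action == "set":
        node[delta["key"]] = delta["value"]
    elif action == "delete":
        node.pop(delta["key"], None)
    return model

model = {}
# Two actors editing concurrently: no merge step, just ordered deltas.
apply_delta(model, {"target": "button.login", "action": "set",
                    "key": "label", "value": "Sign in"})
apply_delta(model, {"target": "button.login", "action": "set",
                    "key": "color", "value": "#0a84ff"})
```

Because each delta names its exact target and action, two actors touching different keys never conflict, and conflicts on the same key reduce to simple ordering rather than a textual merge.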

Adoption

Why would agencies adopt Applikai?

The real risk is not generic switching cost, it's workflow inertia.

Agencies have a Figma → handoff → dev pipeline that their clients have accepted, their developers know, and their project managers can estimate.

Disrupting that is not a tooling decision. It is a strategic one.

 

We do not win by asking agencies to replace their existing workflow.

We win by inserting into it at the point of highest pain.

The specific moment agencies hate most is the gap between design and the working app: where pixel-perfect Figma files become imperfect React Native implementations, where "it looks different on my phone," where A/B testing requires wiring up a third-party SDK.

That gap costs time, budget, and client trust on every single project.

 

Applikai eliminates that gap structurally, not just incrementally.

The design is the app. There is no handoff.

 

So the adoption conversation with an agency isn't "abandon your stack."

It is "what if your next greenfield project shipped 40% faster with zero design-to-dev translation loss?"

That's a one-project trial, not an existential commitment. Agencies run parallel stacks all the time.

 

Rocapine and Quiet are design partners precisely because they're high-volume studios.

They ship many apps, they feel this pain on every project, and they have the appetite to try a new tool without betting their entire business on it.

How do you overcome designers’ resistance to leaving Figma?

We are following the same strategy Figma used to displace Sketch.

We offer features that deliver a 10× improvement in the overall experience, including but not limited to:

  • Fully functional wireframes in the desktop editor
  • Instant sync between the desktop editor and mobile devices
  • Pixel-perfect apps by default
  • Multiple simultaneous AI agentic workflows

We ship updates regularly and work closely with our users.

We demonstrate a technically superior product.

 

We also offer the exact same nomenclature (e.g., fill, hug) and overall design-editing experience.

A designer familiar with Figma will be able to instantly use Applikai.

How do you overcome developers’ resistance to leaving IDEs and Git?

First, a reframe: developers are not the primary decision-makers in Applikai's adoption path.

A founder, PM, or agency lead chooses Applikai.

The developer's job then shifts, and that shift is actually an upgrade.

 

In a traditional mobile stack, developers spend a disproportionate amount of time on work that has nothing to do with engineering: translating Figma files into code pixel by pixel, debugging layout inconsistencies across devices, wiring up analytics SDKs, managing App Store submissions, resolving merge conflicts from design changes.

These are not interesting problems. They are friction.

 

The developers who will resist Applikai are the ones whose identity is tied to the craft of writing Swift or Kotlin by hand.

That's a real group, and they are not our early adopters.

Our early adopters, who are developers, know the pain of shipping apps on mobile.

How do you mitigate vendor lock-in concerns for agencies and enterprises?

Lock-in fear comes from two distinct places:

The first is capability lock-in: "what if Applikai cannot build what my client needs in 6 months?"

Our answer is transparency: we publish our component roadmap publicly, we commit to specific release milestones with design partners, and enterprise contracts include a clause that allows custom component development at a fixed rate.

The second is data and portability lock-in: "what if we want to leave?"

We are evaluating open-sourcing the engine runtime as a concrete commitment to our customers.

If we disappear, your app keeps running and your data is readable without us.

 

The best lock-in protection is a product that keeps getting better faster than the cost of switching.

That's what we're building, and it's what our design partner relationships are designed to stress-test right now.

How do you overcome the “yet another standard” perception in a crowded AI app-builder market?

Most competitors output code and cannot deliver real-time device testing, soft updates, built-in telemetry & A/B, or Figma-style multi-agent collaboration.

Unique app-engine architecture creates a permanent moat; Lovable, Figma, and others cannot integrate these features without complete overhaul.

Design partners (including Rocapine and Quiet) and proven demand (Lovable’s scale) validate real production need from day one.

How do you ease users into Applikai without a steep learning curve?

We closely mirror existing interfaces throughout the product: designers have a familiar Figma-like interface with the same nomenclature.

Our agents interact with Applikai exactly like a user would. They select wireframes, open menus, change properties, etc. with a mouse cursor.

You can ask an agent to show you how to perform a task.

In addition, all tasks can be triggered with plain English to be processed by agents.

There is no need to know how to add a condition to a navigation edge: just tell the agents "When user is logged in, go to screen X, otherwise Y" while selecting the starting screen.

AI models

What happens to your moat if frontier AI models make code generation trivial and error-free?

We are designing for it: Better models accelerate us more than they threaten us.

Every capability improvement in code generation translates directly into better agents inside Applikai.

We are the orchestration layer and runtime, not a code generator.

The smarter the models get, the more valuable a structured, auditable, conflict-free environment becomes to run them in.

 

Code generation getting better does not make Applikai obsolete:

  • It doesn't give you live on-device sync.
  • It doesn't give you over-the-air updates.
  • It doesn't give you native 120 FPS rendering.
  • It doesn't eliminate merge conflicts when five agents are working in parallel.
  • It doesn't give you built-in A/B testing and telemetry without third-party integrations.

 

The analogy is Unreal Engine vs. AI-generated 3D assets.

Better AI tools generate better assets faster. Studios still need a game engine to run them.

Nobody argues that Nano Banana makes Unreal Engine obsolete.

Applikai is the engine. Models are the asset pipeline.

What happens to your moat if frontier AI can directly generate binaries?

Applikai minimizes the use of code as much as possible.

Applikai encodes the functional spec as structured English (e.g., ‘when [event] happens, do [action]’).

Applikai is designed with such models in mind.