Applikai is a web-based collaborative workspace where humans and ambient AI agents build and evolve mobile applications together. Users work on an infinite canvas and collaborate in real time on a shared application model (intent), not on generated code.
Teams building mobile and internal/operational apps - product, design, engineering, ops, data, marketing, sales working together under high iteration pressure. Agencies and studios building apps for clients are one of our primary ICPs within this broader segment.
Users design screens, define flows, connect data and refine app behavior on a shared canvas. AI agents show up as collaborators (engineering, design, ops, data, marketing, sales, ...) and can proactively suggest or apply changes. The same workspace is used both to create a first version and to iterate on an existing product, including viewing key usage/churn signals and turning them into product changes.
Ambient AI means agents stay present, maintain context and understand the current application state over time. They can act proactively when needed, but only through constrained, validated operations on the shared model - not free-form code edits.
Code is a great execution format, but a poor collaboration format at scale. A shared intent model enables semantic diffs ("flow updated", "screen added"), safer global changes and real-time collaboration that remains coherent as the app evolves.
Most AI dev tools generate or edit source code on demand. That works for prototypes, but becomes brittle once an app needs to evolve across iterations with multiple humans and AI touching the same codebase. Coordination becomes manual work, and the cost of change grows fast.
Under the hood, every app is represented as a structured model of intent: screens, components, flows, state, rules, data bindings and constraints. We sometimes call this our DSL (domain-specific language), but users do not write it. The model is the system of record that humans and AI agents edit through validated operations.
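As a purely illustrative sketch, the shape of such a model could be typed along the following lines. All type and field names here are assumptions for exposition, not our actual schema; users never see or write this directly.

```typescript
// Hypothetical sketch of an intent model (illustrative names, not the real schema).
type ScreenId = string;
type ComponentId = string;

interface AppModel {
  screens: Record<ScreenId, Screen>;
  flows: Flow[];             // navigation edges between screens
  rules: Rule[];             // "when [event] happens, do [action]"
  bindings: DataBinding[];   // component property <-> data field
  constraints: Constraint[]; // invariants the validation layer enforces
}

interface Screen {
  id: ScreenId;
  title: string;
  root: ComponentId; // wireframe root component
}

interface Flow {
  from: ScreenId;
  to: ScreenId;
  condition?: string; // e.g. "user.isLoggedIn"
}

interface Rule {
  event: { component: ComponentId; type: "click" | "submit" | "appear" };
  action: { kind: "navigate" | "mutate" | "track"; payload: unknown };
}

interface DataBinding {
  component: ComponentId;
  property: string; // e.g. "text"
  dataPath: string; // e.g. "user.displayName"
}

interface Constraint {
  description: string; // human-readable invariant
  check: (model: AppModel) => boolean;
}
```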
Mobile concentrates the hardest constraints: state, offline behavior, performance, permissions, store submission and compliance. Code-first and on-demand generation approaches break quickly here after the first prototype. If continuous collaboration works for mobile, it generalizes to other app categories.
Our goal is to produce production-grade mobile apps with access to real mobile capabilities. The exact runtime approach is a product decision (native, cross-platform, or hybrid), but the core differentiation is the shared application model and collaboration layer that drives the app.
A focused set of mobile app primitives: common screens, navigation patterns, forms, lists, authentication flows, camera, GPS, background tasks and basic data interactions - optimized for internal/operational apps and long-lived iteration cycles.
We will progressively expand coverage from this focused early set.
Collaboration is built on model operations (not file merges). Users and agents edit the same structured model, with real-time sync, permissions, history and conflict resolution designed for collaborative editing.
Agents propose and execute validated operations on the model (for example: "add screen", "update flow", "change rule", "add tracking event"). A validation layer enforces invariants, and the system can support review/rollback workflows.
Code and runtime artifacts can be generated or adapted from the model, but the model remains the source of truth. This is what keeps iteration safe and collaboration scalable.
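To illustrate how validated operations might gate edits, here is a hedged sketch building on the illustrative types above. The operation names mirror the examples in the text; the invariant shown (navigation edges must reference existing screens) is an assumption for exposition.

```typescript
// Hypothetical sketch: applying a validated operation to the shared model.
type Operation =
  | { kind: "addScreen"; screen: Screen }
  | { kind: "addFlow"; flow: Flow }
  | { kind: "updateRule"; index: number; rule: Rule };

function validate(model: AppModel, op: Operation): string | null {
  if (op.kind === "addFlow") {
    // Invariant: navigation edges must reference existing screens.
    if (!(op.flow.from in model.screens)) return `unknown screen ${op.flow.from}`;
    if (!(op.flow.to in model.screens)) return `unknown screen ${op.flow.to}`;
  }
  return null; // accepted
}

function apply(model: AppModel, op: Operation): AppModel {
  const error = validate(model, op);
  if (error) throw new Error(`operation rejected: ${error}`);
  switch (op.kind) {
    case "addScreen":
      return { ...model, screens: { ...model.screens, [op.screen.id]: op.screen } };
    case "addFlow":
      return { ...model, flows: [...model.flows, op.flow] };
    case "updateRule":
      return { ...model, rules: model.rules.map((r, i) => (i === op.index ? op.rule : r)) };
  }
}
```

Because every edit passes through `validate` before `apply`, the model can never enter a state that violates a declared invariant, which is what makes review and rollback tractable.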
We design for mobile constraints. We target smooth UI on supported devices and fast startup, with performance depending on the app's complexity, assets and device capabilities. We optimize the engine, caching and updates as the product matures.
We follow platform rules. Some changes can be shipped as content/model updates; other changes require a store release. We design the system to keep compliance clear and avoid grey areas.
We minimize sensitive data in the core platform and design for least-privilege access. Enterprise-grade security (SSO, audit logs, SOC2) is on the roadmap as we move from design partners to broader deployments.
The natural motion is bottom-up: one person starts a project, then invites teammates and stakeholders to collaborate on the same canvas. Agencies can also adopt by building client projects and scaling usage across accounts.
SaaS pricing with a base license (seats/teams) plus usage-based AI (tokens/credits) for heavy agent usage. We expect expansion as teams invite more collaborators and adopt more agent workflows.
Vibe coding platforms and AI IDEs optimize for generating the first version quickly, but keep code as the source of truth. That makes iteration and collaboration fragile beyond the prototype. Applikai is built around a shared application model, real-time collaboration and ambient agents operating safely on intent, not on code diffs.
Applikai targets the emerging market of AI-native software development tools: products that help teams build and evolve applications with AI, not just generate a first version. We start with mobile, where state, offline behavior, deployment constraints, and multi-stakeholder collaboration make iteration hardest and the need most acute.
Selected public signals:
Spend-based approach
TAM = global spend on tools used to build and evolve applications (AI-native development, app-building and iteration tooling). Mobile is a wedge, not a separate TAM line item.
TAM (2025/26): ~$40B–$70B in annual spend.
TAM grows to ~$150B–$270B over the next 5–8 years as low-code and AI-native development expand.
Anchor: Low-code development platforms alone are estimated at ~$37.4B in 2025 (Fortune Business Insights), before adding AI coding tools and adjacent app development spend. (https://www.fortunebusinessinsights.com/low-code-development-platform-market-102972)
Key TAM assumptions
SAM (initial wedge: mobile-first, collaboration-heavy teams and builders): ~$6B-$15B
This reflects capturing ~15–25% of the TAM initially (mobile + long-lived apps where iteration and coordination costs are highest), with expansion to web/backend/slides/animation workflows later.
SAM focuses on the subset of the market that matches Applikai’s initial product constraints and wedge: mobile-first teams, solo builders, and founders that iterate frequently and benefit from real-time collaboration + persistent AI agents.
The SOM is built on three distinct pricing models that map to three distinct segments:
Each has its own ARPU logic and growth driver.
Revenue model by segment
| Segment | Model | Pricing | Base ARPU / yr | + AI tokens ($30/user/mo) | Total ARPU / yr |
|---|---|---|---|---|---|
| Solo / Founder / SME | Per seat (license) | $49/seat/mo individual · $39/seat/mo teams | ~$540/seat | +$360/seat | ~$900/seat |
| Agency / Studio | Base license + per active project | $299/mo base (5 proj.) + $89/proj. · avg. 15 proj. → ~$1,185/mo | ~$14,200/agency | +$10,800 (30 users × $30 × 12) | ~$25,000/agency |
| Enterprise | Per seat, annual contract | $200/seat/mo · min 20 seats · SOC2/HIPAA | ~$72K (30 seats) | +$10,800 (30 users × $30 × 12) | ~$83K ACV |
SOM projection
| | Year 1 | Year 2 | Year 3 | Year 4 |
|---|---|---|---|---|
| ARR range | $5–12M | $48–94M | $213–384M | $657M–1.15B |
| Figma (ref) | ~$10M | ~$50M | ~$150M | ~$400M |
| Lovable (ref) | ~$150M | ~$400M | ~$800M | ~$1.5B+ |
Key assumptions:
Why PMs change the model: they are team-budget buyers, not personal-card buyers.
A single PM seat converts to a team license within 3 to 6 months in 50% of cases, making PMs the highest-leverage bottom-up acquisition vector after agency introductions.
Applikai is designed to onboard PMs without learning a new tool, new habits, or a new workflow.
Three structural reasons:
Our app engine has compliant behaviors by default. For example: App Tracking Transparency (ATT) and telemetry.
We auto-generate privacy manifests, nutrition labels, required policies, and permission explanations from actual data flows.
Agents flag risky patterns (aggressive notifications, missing attribution) during building.
Compliance wizards guide users through edge cases such as apps for kids or apps handling financial data.
Consistently high frame rates: We target up to 120 FPS on supported hardware and smooth performance across all supported devices.
Fast cold starts: An app's typical compressed size (without assets) is a few hundred KB, with cold starts targeted under 100 ms.
Low resource footprint for lower battery consumption: C++ and Metal allow for highly optimized code.
Fluid, native-feeling interactions: Gestures, haptics, keyboard handling, safe areas, dynamic type, VoiceOver/TalkBack, reduced motion, ... all owned by the engine. No stutter.
Applikai is designed to remain compliant with the App Store guidelines.
Wireframes, navigation, etc. are fed as data to the engine, not as executable or interpreted code.
We choose to enforce a policy stricter than Apple's review standard to prevent any issues, and we monitor guideline updates to remain compliant.
We only allow text, images, colors, safe A/B variants of pre-approved elements, and changes that do not alter app behavior.
Other updates go through the classic app review process.
In short, hard to replicate without major rewrites.
Code-based app builders cannot implement most of Applikai's features or do so as effectively.
Using Git makes agentic collaboration hard by default and impossible for non-technical people.
You cannot "just add AI" on top of an existing product with years of legacy.
For example, Figma could not integrate agentic features into Figma Design and had to ship a completely separate Figma Make.
Competitors will focus on their markets and existing customers, not start a full rewrite that will render all existing projects obsolete.
Core technical mismatch: Figma cannot "just add AI" into its product.
Strategic reality: Figma attempting to become Applikai would mean abandoning its core identity as the universal design tool, fragmenting its ecosystem of 10M+ designers, and diluting the value of every existing .fig file - all to enter a market it has never operated in.
Core technical mismatch: Lovable's legacy inertia.
Strategic reality: Lovable copying Applikai would mean scrapping its architecture, its GTM, and its valuation while rebuilding from zero in a domain it has no experience in.
Layout: Vertical, Horizontal, Grid, ...
Components: Button, Email Input, ...
Logic: what to do when an element is clicked, ...
Navigation: the user flow
Animation: transition between screens, fade-ins, fade-outs, ...
Native & hardware features: notifications, camera, ...
Data Model: Users, Images, Messages
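To ground these dimensions, here is an assumed, simplified instance of what one tiny app could look like in such a representation. The structure and field names are invented for illustration, not our actual format.

```typescript
// Hypothetical instance covering layout, components, logic,
// navigation, native features, and data model for a minimal chat app.
const tinyApp = {
  dataModel: { User: ["id", "name"], Message: ["id", "from", "text"] },
  screens: {
    "screen-login": {
      layout: "vertical",
      components: ["input-email", "btn-login"],
    },
    "screen-home": {
      layout: "vertical",
      components: ["list-messages"],
    },
  },
  logic: [{ when: "btn-login.clicked", do: "navigate:screen-home" }],
  navigation: [{ from: "screen-login", to: "screen-home", transition: "fade" }],
  native: [{ feature: "notifications", usage: "new message alerts" }],
};
```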
The core of the engine is in C++.
For the graphics: Metal.
WebSockets for the real-time networking.
We implement a core set of primitives in-house, and expand coverage progressively.
We avoid implementing every component in isolation to prevent combinatorial complexity.
Instead of having each UI element as a separate widget, we use a unified component system built from layout, logic, display, and animation properties.
The logic of a component is a set of behaviors (clickable, scrollable, ...).
Every component carries all the properties needed to render any type of UI element.
Each component supports text content, background color, border thickness, ... even if not used.
Components are also composable: a list item will be built from other components.
This makes the component system flexible and powerful.
Additionally, using AI, we can quickly generate many components from a few dozen well-crafted examples.
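A minimal sketch of this unified component idea, with assumed property and behavior names (including the alignment and hug/fill sizing modes detailed later in this document):

```typescript
// Hypothetical sketch of a unified component: every component carries
// the full property set, plus a set of behaviors that activate logic.
type SizeMode =
  | { unit: "px" | "percent"; value: number }
  | { unit: "hug" }   // size to content
  | { unit: "fill" }; // fill available space

type Behavior = "clickable" | "scrollable" | "editable";

interface Component {
  id: string;
  semantic: "button" | "text" | "image" | "textInput" | "checkbox" | "layout";
  children: Component[];    // composable: a list item is built from components
  behaviors: Set<Behavior>; // the component's logic
  // Full property set, present even when unused by the semantic type:
  text?: string;
  backgroundColor?: string; // e.g. "#FFFFFF"
  borderThickness?: number; // px
  width: SizeMode;
  height: SizeMode;
  alignH?: "left" | "center" | "right";
  alignV?: "top" | "center" | "bottom";
}

// A button is just a component with the clickable behavior:
const submitButton: Component = {
  id: "btn-submit",
  semantic: "button",
  children: [],
  behaviors: new Set(["clickable"]),
  text: "Submit",
  backgroundColor: "#3366FF",
  width: { unit: "fill" },
  height: { unit: "hug" },
};
```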
We chose not to support every possible case to avoid making Applikai overly complex.
Most apps use the same few components (Shadcn UI, for example, has ~60 standard components).
We will roll out components incrementally. There is no need to launch with 300 components.
We will roll out new components based on telemetry, user requests, and agent analysis.
We plan to have multiple agents proactively coding and suggesting new components.
For agencies, or big companies, an enterprise tier can include the development of custom components.
Rolling out components incrementally also demonstrates that we are a dedicated team that listens to the community.
Applikai is a deep-tech product. Not something that can be assembled from off-the-shelf components or generated with a prompt.
Our CTO Julien has been building the core engine solo: the real-time networking layer, the component system, and the agentic harness.
We are raising to move from validated architecture to shipped product: hiring the experienced engineers who will build on top of what Julien has laid down, and accelerating the roadmap with our first design partners.
Vertical/Horizontal layout
Checkbox
Button
Double Slider
Icon
Image
Label
Loader
Progress Bar
Slider
Subtitle
Text
Text Input
Title
Wireframe (root component)
Horizontal/vertical alignment (left/center/right + top/center/bottom).
Width and height in pixels, percent, hug, or fill.
In our case, animations are straightforward.
An animation is simply an interpolation of a component's properties between two points.
In our component system, all components have all properties (position, size, color, ...).
An animation consists of a component ID, a property ID, a duration, and an easing function.
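A hedged sketch of that representation follows. The names and easing curves are assumptions; the real engine interpolates in C++, but the logic is the same.

```typescript
// Hypothetical sketch: an animation as an interpolation of one
// component property between two values.
type Easing = "linear" | "easeIn" | "easeOut" | "easeInOut";

interface Animation {
  componentId: string;
  propertyId: string; // e.g. "backgroundColor" or "x"
  from: number;
  to: number;
  durationMs: number;
  easing: Easing;
}

// Evaluate the animated value at time t (0 <= t <= durationMs):
function valueAt(anim: Animation, t: number): number {
  const p = Math.min(Math.max(t / anim.durationMs, 0), 1);
  const eased =
    anim.easing === "linear" ? p :
    anim.easing === "easeIn" ? p * p :
    anim.easing === "easeOut" ? 1 - (1 - p) * (1 - p) :
    /* easeInOut */ p < 0.5 ? 2 * p * p : 1 - 2 * (1 - p) * (1 - p);
  return anim.from + (anim.to - anim.from) * eased;
}
```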
Lottie files can easily be supported: we can embed a best-in-class Lottie player in Applikai without issue.
Lottie, however, does not handle basic use cases like screen transitions, which is why we have our own animation system.
Lottie will be integrated after Applikai’s initial release.
Applikai has a bridge to hardware features.
On iOS, dedicated Swift code handles the camera and can be called from the app. Same for other features.
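As an illustration only, the bridge surface exposed to the model could be sketched like this. The feature and method names are assumptions; the real implementation crosses from the C++ engine into platform code such as the Swift camera module.

```typescript
// Hypothetical sketch of the bridge surface exposed to the app model.
// On iOS the registered handler would be backed by dedicated Swift code.
type NativeFeature = "camera" | "gallery" | "contacts" | "notifications";

interface BridgeRequest {
  feature: NativeFeature;
  method: string;                  // e.g. "takePhoto"
  params: Record<string, unknown>;
}

interface BridgeResult {
  ok: boolean;
  data?: unknown; // e.g. a handle to the captured photo
  error?: string; // e.g. "permission denied"
}

// Platform code registers one handler per feature at startup.
const handlers = new Map<
  NativeFeature,
  (req: BridgeRequest) => Promise<BridgeResult>
>();

async function callNative(req: BridgeRequest): Promise<BridgeResult> {
  const handler = handlers.get(req.feature);
  if (!handler) {
    return { ok: false, error: `feature "${req.feature}" not available` };
  }
  return handler(req);
}
```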
Most apps use the same limited set of native/hardware features (camera, gallery, permissions, contacts).
We will roll out new native and hardware features over time based on telemetry, user requests, and agent analysis.
We plan to have multiple agents proactively coding and suggesting new native/hardware features.
We chose not to support every possible case; otherwise, Applikai would become a Rube Goldberg machine.
For almost all apps, the logic and operations performed are confined to a simple scope.
Mobile apps also operate on strict conventions that users expect. For example, the back arrow in the top-left corner.
We also divide the complexity of an app into separate, local problems.
For instance, the navigation system (with screen transitions) is completely separate from the wireframes.
We use our infinite canvas to showcase information, like a navigation graph.
We believe that dedicated tooling can replace code in almost all cases.
Before AI, the problem with this kind of tooling was the clunky interface and the tedious hours spent configuring everything that could otherwise be done in 2 lines of code.
With AI, all of that disappears.
The AI translates plain English into whatever configuration is needed.
Plain English is only an input to the agents.
Applikai's agents transform the plain English input into structured data.
Applikai uses semantic components (button, image, etc.) and structured English (e.g., ‘when [event] happens, do [action]’) for logic.
Applikai uses a navigation graph and other visual tooling.
The user speaks in English and receives structured, understandable, and auditable data.
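For example, here is an assumed sketch of the structured form behind "when [event] happens, do [action]"; field names are illustrative.

```typescript
// Hypothetical sketch of the structured data an agent produces from
// "When the login button is clicked, go to the home screen":
interface StructuredRule {
  when: { subject: string; event: string };
  do: { action: string; target: string };
}

const rule: StructuredRule = {
  when: { subject: "btn-login", event: "clicked" },
  do: { action: "navigate", target: "screen-home" },
};

// The user reads this back as structured English:
function render(r: StructuredRule): string {
  return `when [${r.when.subject} ${r.when.event}] happens, do [${r.do.action} → ${r.do.target}]`;
}
```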
Applikai sees every tap, every interaction.
Applikai also knows all your screens, your navigation graphs, and the A/B variants.
The app engine handles the ATT (App Tracking Transparency).
Julien and Antoine were co-workers at Tempow, where they shipped high-performance Android kernel Bluetooth drivers to 50M+ devices.
Antoine implemented the LC3 Bluetooth Audio Codec for Google, embedded in billions of devices.
Julien has 20+ years of coding in asm, C/C++, Java, Python, and HTML/CSS/TS.
Antoine has 30+ years of coding in asm, C/C++, Kotlin, and Flutter.
We use an Orchestrator/Worker/Judge model.
Orchestrators produce a task graph, workers execute the tasks in the correct order, and judges validate the results.
We run Claude-powered agents (or equivalent LLMs) in dedicated VMs, each with full access to project data.
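A minimal sketch of that loop under assumed interfaces; real orchestration adds retries, prioritization, and the actual LLM calls.

```typescript
// Hypothetical sketch of the Orchestrator/Worker/Judge loop over a task graph.
interface Task {
  id: string;
  dependsOn: string[];
  run: () => Promise<unknown>;         // worker's job
  judge: (output: unknown) => boolean; // judge's validation
}

async function execute(graph: Task[]): Promise<void> {
  const done = new Set<string>();
  while (done.size < graph.length) {
    // Workers pick up every task whose dependencies are satisfied.
    const ready = graph.filter(
      (t) => !done.has(t.id) && t.dependsOn.every((d) => done.has(d))
    );
    if (ready.length === 0) throw new Error("cycle or unsatisfiable dependency in task graph");
    await Promise.all(
      ready.map(async (task) => {
        const output = await task.run(); // worker executes
        if (!task.judge(output)) {       // judge validates
          throw new Error(`task ${task.id} rejected by judge`);
        }
        done.add(task.id);
      })
    );
  }
}
```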
With Git, collaboration is very hard.
This is true even for humans working without agents.
Git is asynchronous. You get large edits that require merging and a conflict-resolution step.
We use a real-time model instead: no branches, no asynchronous editing.
Users and agents edit an app by submitting small deltas: adding a button, changing a color, ...
A delta has a defined target and action.
This makes the agent-to-agent, agent-to-user, and user-to-user collaboration trivial compared to Git.
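A hedged sketch of what such a delta could look like on the wire; the shape is assumed for illustration.

```typescript
// Hypothetical sketch of a delta: a defined target plus a defined action.
type Delta =
  | { target: { screen: string };
      action: { type: "addComponent"; componentId: string } }
  | { target: { component: string };
      action: { type: "setProperty"; property: string; value: unknown } };

// "Change the button color" expressed as one small, reviewable edit:
const recolor: Delta = {
  target: { component: "btn-submit" },
  action: { type: "setProperty", property: "backgroundColor", value: "#FF6633" },
};

// Each client (human or agent) sends its deltas over the WebSocket and
// applies the deltas the server broadcasts, so every view converges live.
function send(socket: WebSocket, delta: Delta): void {
  socket.send(JSON.stringify(delta));
}
```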
The real risk is not generic switching cost; it's workflow inertia.
Agencies have a Figma → handoff → dev pipeline that their clients have accepted, their developers know, and their project managers can estimate.
Disrupting that is not a tooling decision. It is a strategic one.
We do not win by asking agencies to replace their existing workflow.
We win by inserting into it at the point of highest pain.
The specific moment agencies hate most is the gap between design and the working app: where pixel-perfect Figma files become imperfect React Native implementations, where "it looks different on my phone," where A/B testing requires wiring up a third-party SDK.
That gap costs time, budget, and client trust on every single project.
Applikai eliminates that gap structurally, not just incrementally.
The design is the app. There is no handoff.
So the adoption conversation with an agency isn't "abandon your stack."
It is "what if your next greenfield project shipped 40% faster with zero design-to-dev translation loss?"
That's a one-project trial, not an existential commitment. Agencies run parallel stacks all the time.
Rocapine and Quiet are design partners precisely because they're high-volume studios.
They ship many apps, they feel this pain on every project, and they have the appetite to try a new tool without betting their entire business on it.
We are following the same strategy Figma used to displace Sketch.
We offer features that deliver a 10× improvement in the overall experience, including but not limited to:
We ship updates regularly and work closely with our users.
We demonstrate a technically superior product.
We also use the same nomenclature (e.g., fill, hug).
First, a reframe: developers are not the primary decision-makers in Applikai's adoption path.
A founder, PM, or agency lead chooses Applikai.
The developer's job then shifts, and that shift is actually an upgrade.
In a traditional mobile stack, developers spend a disproportionate amount of time on work that has nothing to do with engineering: translating Figma files into code pixel by pixel, debugging layout inconsistencies across devices, wiring up analytics SDKs, managing App Store submissions, resolving merge conflicts from design changes.
These are not interesting problems. They are friction.
The developers who will resist Applikai are the ones whose identity is tied to the craft of writing Swift or Kotlin by hand.
That's a real group, and they are not our early adopters.
Our early adopters who are developers know the pain of shipping apps on mobile.
Lock-in fear comes from two distinct places:
The first is capability lock-in: "what if Applikai cannot build what my client needs in 6 months?"
Our answer is transparency: we publish our component roadmap publicly, we commit to specific release milestones with design partners, and enterprise contracts include a clause that allows custom component development at a fixed rate.
The second is data and portability lock-in: "what if we want to leave?"
We are evaluating open-sourcing the engine runtime as a concrete commitment to our customers.
If we disappear, your app keeps running and your data is readable without us.
The best lock-in protection is a product that keeps improving faster than the cost of switching grows.
That's what we're building, and it's what our design partner relationships are designed to stress-test right now.
Most competitors output code and cannot deliver real-time device testing, soft updates, built-in telemetry & A/B, or Figma-style multi-agent collaboration.
Unique app-engine architecture creates a durable moat; Lovable, Figma, and others cannot integrate these features without a complete overhaul.
Design partners (including Rocapine and Quiet) and proven demand (Lovable’s scale) validate real production need from day one.
We closely mirror existing interfaces throughout the product: designers have a familiar Figma-like interface with the same nomenclature.
Our agents interact with Applikai exactly like a user would. They select wireframes, open menus, change properties, etc. with a mouse cursor.
You can ask an agent to show you how to perform a task.
In addition, all tasks can be triggered with plain English to be processed by agents.
There is no need to know how to add a condition to a navigation edge: just tell the agents "When user is logged in, go to screen X, otherwise Y" while selecting the starting screen.
We are designing for it: Better models accelerate us more than they threaten us.
Every capability improvement in code generation translates directly into better agents inside Applikai.
We are the orchestration layer and runtime, not a code generator.
The smarter the models get, the more valuable a structured, auditable, conflict-free environment becomes to run them in.
Code generation getting better does not make Applikai obsolete:
The analogy is Unreal Engine vs. AI-generated 3D assets.
Better AI tools generate better assets faster. Studios still need a game engine to run them.
Nobody argues that Nano Banana makes Unreal Engine obsolete.
Applikai is the engine. Models are the asset pipeline.
Applikai minimizes the use of code as much as possible.
Applikai encodes the functional spec as structured English (e.g., ‘when [event] happens, do [action]’).
Applikai is designed with such models in mind.
Mathieu
Julien
AI
Paul
Back to website
Applikai is a web-based collaborative workspace where humans and ambient AI agents build and evolve mobile applications together. Users work on an infinite canvas and collaborate in real time on a shared application model (intent), not on generated code.
Teams building mobile and internal/operational apps - product, design, engineering, ops, data, marketing, sales working together under high iteration pressure. Agencies and studios building apps for clients are one of our primary ICPs within this broader segment.
Users design screens, define flows, connect data and refine app behavior on a shared canvas. AI agents show up as collaborators (engineering, design, ops, data, marketing, sales, ..) and can proactively suggest or apply changes. The same workspace is used both to create a first version and to iterate on an existing product, including viewing key usage/churn signals and turning them into product changes.
Under the hood, every app is represented as a structured model of intent: screens, components, flows, state, rules, data bindings and constraints. We sometimes call this our DSL (domain-specific language), but users do not write it. The model is the system of record that humans and AI agents edit through validated operations.
Most AI dev tools generate or edit source code on demand. That works for prototypes, but becomes brittle once an app needs to evolve across iterations with multiple humans and AI touching the same codebase. Coordination becomes manual work, and the cost of change grows fast.
Code is a great execution format, but a poor collaboration format at scale. A shared intent model enables semantic diffs ("flow updated", "screen added"), safer global changes and real-time collaboration that remains coherent as the app evolves.
Ambient AI means agents stay present, maintain context and understand the current application state over time. They can act proactively when needed, but only through constrained, validated operations on the shared model - not free-form code edits.
Mobile concentrates the hardest constraints: state, offline behavior, performance, permissions, store submission and compliance. Code-first and on-demand generation approaches break quickly here after the first prototype. If continuous collaboration works for mobile, it generalizes to other app categories.
Our goal is to produce production-grade mobile apps with access to real mobile capabilities. The exact runtime approach is a product decision (native, cross-platform, or hybrid), but the core differentiation is the shared application model and collaboration layer that drives the app.
A focused set of mobile app primitives: common screens, navigation patterns, forms, lists, authentication flows, camera, GPS, background tasks and basic data interactions - optimized for internal/operational apps and long-lived iteration cycles.
We will progressively expand coverage. Early on:
Collaboration is built on model operations (not file merges). Users and agents edit the same structured model, with real-time sync, permissions, history and conflict resolution designed for collaborative editing.
Agents propose and execute validated operations on the model (for example: "add screen", "update flow", "change rule", "add tracking event"). A validation layer enforces invariants, and the system can support review/rollback workflows.
Code and runtime artifacts can be generated or adapted from the model, but the model remains the source of truth. This is what keeps iteration safe and collaboration scalable.
We design for mobile constraints. We target smooth UI on supported devices and fast startup, with performance depending on the app's complexity, assets and device capabilities. We optimize the engine, caching and updates as the product matures.
We follow platform rules. Some changes can be shipped as content/model updates; other changes require a store release. We design the system to keep compliance clear and avoid grey areas.
We minimize sensitive data in the core platform and design for least-privilege access. Enterprise-grade security (SSO, audit logs, SOC2) is on the roadmap as we move from design partners to broader deployments.
The natural motion is bottom-up: one person starts a project, then invites teammates and stakeholders to collaborate on the same canvas. Agencies can also adopt by building client projects and scaling usage across accounts.
SaaS pricing with a base license (seats/teams) plus usage-based AI (tokens/credits) for heavy agent usage. We expect expansion as teams invite more collaborators and adopt more agent workflows.
Vibe coding platforms and AI IDEs optimize for generating the first version quickly, but keep code as the source of truth. That makes iteration and collaboration fragile beyond the prototype. Applikai is built around a shared application model, real-time collaboration and ambient agents operating safely on intent, not on code diffs.
Applikai targets the emerging market of AI-native software development tools: products that help teams build and evolve applications with AI, not just generate a first version. We start with mobile, where state, offline behavior, deployment constraints, and multi-stakeholder collaboration make iteration hardest and the need most acute.
Selected public signals:
Spend-based approach
TAM = global spend on tools used to build and evolve applications (AI-native development, app-building and iteration tooling). Mobile is a wedge, not a separate TAM line item.
TAM (2025/26): ~$40B-$70B in annual spend.
TAM grows to ~$150B-$270B over the next 5-8 years as low-code and AI-native development expand.
Anchor: Low-code development platforms alone are estimated at ~$37.4B in 2025 (Fortune Business Insights), before adding AI coding tools and adjacent app development spend. (https://www.fortunebusinessinsights.com/low-code-development-platform-market-102972)
Key TAM assumptions
SAM (initial wedge: mobile-first, collaboration-heavy teams and builders): ~$6B-$15B
This reflects capturing ~15–25% of the TAM initially (mobile + long-lived apps where iteration and coordination costs are highest), with expansion to web/backend/slides/animation workflows later.
SAM focuses on the subset of the market that matches Applikai’s initial product constraints and wedge: mobile-first teams /solo/ founders that iterate frequently and benefit from real-time collaboration + persistent AI agents.
The SOM is built on two distinct pricing models that map to two distinct segments:
Each has its own ARPU logic and growth driver.
Revenue model by segment
Model
Pricing
ARPU
+ Token ($30/u/mo)
Total ARPU / yr
Per seat (licence)
$49/seat/mo indiv
$39/seat/mo teams
~$540/seat
+$360/seat
~$900/seat
Base licence
+ per active project
$299/mo base (5 proj.) + $89/proj. · avg. 15 proj. → ~$1,185/mo
~$14,200/agency
+$10,800 (30 users × $30 × 12)
~$25,000/agency
Per seat, annual contract
$200/seat/mo · min 20 seats · SOC2/HIPAA
~$72K (30 seats)
+$10,800 (30 users × $30 × 12)
~$83K ACV
Segment
Solo / Founder
/ SME
Agency / Studio
Enterprise
SOM projection
Year 1
Year 2
Year 3
Year 4
ARR range
$5–12M
$48–94M
$213–384M
$657M–1.15B
Figma (ref)
~$10M
~$50M
~$150M
~$400M
Lovable (ref)
~$150M
~$400M
~$800M
~$1.5B+
Key assumptions:
Why PMs change the model:
they are team-budget buyers, not personal-card buyers.
A single PM seat converts to a team licence within 3 to 6 months in 50% of cases, making them the highest-leverage bottom-up acquisition vector after agency introductions.
Applikai is designed to onboard PMs without learning a new tool, new habits, and a new workflow.
Three structural reasons:
Consistently high frame rates: We target up to 120 FPS on supported hardware and deliver smooth performance on all devices.
Fast cold starts: An app's typical compressed size (without assets) is a few hundred KB, with cold starts targeted under 100 ms.
Low resource footprint for lower battery consumption: C++ and Metal allow for highly optimized code.
Fluid, native-feeling interactions: Gestures, haptics, keyboard handling, safe areas, dynamic type, VoiceOver/TalkBack, reduced motion, ... all owned by the engine. No stutter.
Our app engine has compliant behaviors by default. For example: App Tracking Transparency (ATT) and telemetry.
We auto-generate privacy manifests, nutrition labels, required policies, and permission explanations from actual data flows.
Agents flag risky patterns (aggressive notifications, missing attribution) during building.
Compliance wizards guide users through edge cases such as apps for kids or apps handling financial data.
Applikai is designed to remain compliant with Apple Store guidelines.
Wireframes, navigation, etc. are fed as data to the engine, not executable or interpreted code.
We choose to enforce a policy stricter than the standard of Apple reviews to prevent any issue. We monitor guideline updates to always be compliant.
We only allow text, images, colors, safe A/B variants of pre-approved elements, and changes that do not alter app behavior.
Other updates go through the classic app review process.
In short, hard to replicate without major rewrites.
Code-based app builders cannot implement most of Applikai's features or do so as effectively.
Using Git makes agentic collaboration hard by default and impossible for non-technical people.
You cannot "just add AI" on top of an existing product with years of legacy.
For example, Figma could not integrate agentic features into Figma Design and had to ship a completely separate Figma Make.
Competitors will focus on their markets and existing customers, not start a full rewrite that will render all existing projects obsolete.
Core technical mismatch
Figma cannot "just add AI" into its product
Strategic reality
Figma attempting to become Applikai would mean abandoning its core identity as the universal design tool, fragmenting its ecosystem of 10M+ designers, and diluting the value of every existing .fig file.
All to enter a market it has never operated in.
Core technical mismatch
Lovable's legacy inertia
Strategic reality
Lovable copying Applikai means scrapping their architecture, their GTM, and their valuation while rebuilding from zero in a domain they have no experience in.
Layout: Vertical, Horizontal, Grid, ...
Components: Button, Email Input, ...
Logic: what to do when an element is clicked, ...
Navigation: the user flow
Animation: transition between screens, fade-ins, fade-outs, ...
Native & hardware features: notifications, camera, ...
Data Model: Users, Images, Messages
The core of the engine is in C++.
For the graphics:
WebSockets for the real-time networking.
We implement a core set of primitives in-house, and expand coverage progressively.
We avoid implementing every component in isolation to prevent combinatorial complexity.
Instead of having each UI element as a separate widget, we use a unified component system built from layout, logic, display, and animation properties.
The logic of a component is a set of behaviors (clickable, scrollable, ...).
Every component carries all the properties needed to render any type of UI element.
Each component supports text content, background color, border thickness, ... even if not used.
Components are also composable: a list item will be built from other components.
This makes the component system flexible and powerful.
Additionally, using AI, we can quickly generate many components from a few dozen well-crafted examples.
We chose not to support every possible case to avoid making Applikai overly complex.
Most apps use the same few components (Shadcn UI, for example, has ~60 standard components).
We will roll out components incrementally. There is no need to launch with 300 components.
We will roll out new components based on telemetry, user requests, and agent analysis.
We plan to have multiple agents proactively coding and suggesting new components.
For agencies, or big companies, an enterprise tier can include the development of custom components.
Our ability to roll out components incrementally demonstrates that we are a dedicated team that listens to the community.
Applikai is a deep-tech product. Not something that can be assembled from off-the-shelf components or generated with a prompt.
Our CTO Julien has been building the core engine solo: the real-time networking layer, the component system, and the agentic harness.
We are raising to move from validated architecture to shipped product: hiring the experienced engineers who will build on top of what Julien has laid down, and accelerating the roadmap with our first design partners.
Vertical/Horizontal layout
Checkbox
Button
Double Slider
Icon
Image
Label
Loader
Progress Bar
Slider
Subtitle
Text
Text Input
Title
Wireframe (root component)
Horizontal/vertical alignment (left/center/right + top/center/bottom).
Width and height in pixels, percent, hug, or fill.
In our case, it is straightforward.
An animation is simply an interpolation of a component's properties between two points.
In our component system, all components have all properties (position, size, color, ...).
An animation consists of a component ID, a property ID, a duration, and an easing function.
We can use the best Lottie player without issue in Applikai.
Lottie files can be easily supported.
Lottie does not handle basic use cases like screen transitions.
Lottie will be integrated after Applikai’s initial release.
Applikai has a bridge to hardware features.
On iOS, dedicated Swift code handles the camera and can be called from the app. Same for other features.
Most apps use the same limited set of native/hardware features (camera, gallery, permissions, contacts).
We will roll out new native and hardware features over time based on telemetry, user requests, and agent analysis.
We plan to have multiple agents proactively coding and suggesting new native/hardware features.
We chose not to support every possible case; otherwise Applikai would become a Rube Goldberg machine.
For almost all apps, the logic and operations performed are confined to a simple scope.
Mobile apps also operate on strict conventions that users expect. For example, the back arrow in the top-left corner.
We also divide the complexity of an app into separate, local problems.
Navigation (including screen transitions), for instance, is a completely separate system from the wireframes.
We use our infinite canvas to showcase information, like a navigation graph.
We believe that dedicated tooling can replace code in almost all cases.
Before AI, the problem was the clunky interface and the tedious hours spent configuring everything that could otherwise be done in 2 lines of code.
With AI, all of it disappears.
The AI translates plain English into the structured data the model needs.
Plain English is only an input to the agents.
Applikai's agents transform the plain English input into structured data.
Applikai uses semantic components (button, image, etc.) and structured English (e.g., ‘when [event] happens, do [action]’) for logic.
Applikai uses a navigation graph and other visual tooling.
The user speaks in English and receives structured, understandable, and auditable data.
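A sketch of how such a rule might be stored; the shape and identifiers below are hypothetical, while the form shown to users stays readable English:

```cpp
// Hypothetical storage shape for a "when [event] happens, do [action]" rule.
// The user sees structured English; the model stores auditable data.
#include <string>
#include <vector>

struct LogicRule {
    std::string event;                // e.g. "button:signup:clicked"
    std::vector<std::string> actions; // e.g. {"navigate:screen:home"}
};

// Rendered back to the user as:
//   "When the Signup button is clicked, go to the Home screen."
```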
Applikai sees every tap, every interaction.
Applikai also knows all your screens, your navigation graph, and your A/B variants.
The app engine handles the ATT (App Tracking Transparency).
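Because the engine owns rendering, events can be captured at the source. A hypothetical event record could look like this:

```cpp
// Hypothetical shape of a built-in telemetry event. Because the engine owns
// rendering, every tap can be attributed to a screen, a component, and the
// active A/B variant without a third-party SDK. Field names are invented.
#include <cstdint>
#include <string>

struct TapEvent {
    std::string screenId;
    std::string componentId;
    std::string abVariant;   // e.g. "onboarding:B"
    uint64_t timestampMs = 0;
};
```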
Julien and Antoine were co-workers at Tempow, where they shipped high-end, highly performant Android kernel Bluetooth drivers to 50M+ devices.
Antoine implemented the LC3 Bluetooth audio codec for Google, now embedded in billions of devices.
Julien has 20+ years of coding in asm, C/C++, Java, Python, and HTML/CSS/TS.
Antoine has 30+ years of coding in asm, C/C++, Kotlin, and Flutter.
We use an Orchestrator/Worker/Judge model.
Orchestrators produce a task graph, workers execute the tasks in the correct order and judges validate.
We run Claude-powered agents (or equivalent LLMs) in dedicated VMs, each with full access to project data.
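Schematically, the control flow looks like the stub below; all names and the trivial bodies are illustrative, not the production harness.

```cpp
// Schematic control flow of the Orchestrator/Worker/Judge model.
// All names and the trivial bodies below are illustrative stubs.
#include <string>
#include <vector>

struct Task { std::string description; };

// Orchestrator: turn a goal into an ordered task graph (stubbed as a list).
std::vector<Task> orchestrate(const std::string& goal) {
    return { Task{goal} };
}

// Worker: execute one task, producing a proposed change (a delta).
std::string runWorker(const Task& task) {
    return "delta-for: " + task.description;
}

// Judge: validate the worker's output before it touches the shared model.
bool judge(const Task&, const std::string& result) {
    return !result.empty();
}

void execute(const std::string& goal) {
    for (const auto& task : orchestrate(goal)) {
        const auto result = runWorker(task);
        if (judge(task, result)) {
            // Accepted: apply the validated delta to the shared model.
        } else {
            // Rejected: retry or re-plan; nothing touches the model.
        }
    }
}
```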
With Git, this kind of collaboration is very hard.
That is true even for humans working without agents.
Git is asynchronous: you get large edits that must be merged, with a conflict-resolution step.
We use a real-time model. No branch, no asynchronous editing.
Users and agents edit an app by submitting small deltas: Adding a button, changing a color, ...
A delta has a defined target and action.
This makes agent-to-agent, agent-to-user, and user-to-user collaboration trivial compared to Git.
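A minimal sketch of a delta, with illustrative field names:

```cpp
// Minimal sketch of a delta: a small, validated edit with a defined target
// and action. Field names are illustrative, not the real wire format.
#include <map>
#include <string>

struct Delta {
    std::string author;  // user or agent ID
    std::string target;  // e.g. "screen:login/button:submit"
    std::string action;  // e.g. "set-property", "add-component"
    std::map<std::string, std::string> args; // e.g. {"color", "#FF0000"}
};

// Deltas are applied to the shared model in real time after validation.
// Because each delta is small and targeted, most edits commute, and
// conflicts are rare compared to file-level Git merges.
```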
The real risk is not generic switching cost, it's workflow inertia.
Agencies have a Figma → handoff → dev pipeline that their clients have accepted, their developers know, and their project managers can estimate.
Disrupting that is not a tooling decision. It is a strategic one.
We do not win by asking agencies to replace their existing workflow.
We win by inserting into it at the point of highest pain.
The specific moment agencies hate most is the gap between design and the working app: where pixel-perfect Figma files become imperfect React Native implementations, where "it looks different on my phone," where A/B testing requires wiring up a third-party SDK.
That gap costs time, budget, and client trust on every single project.
Applikai eliminates that gap structurally, not just incrementally.
The design is the app. There is no handoff.
So the adoption conversation with an agency isn't "abandon your stack."
It is "what if your next greenfield project shipped 40% faster with zero design-to-dev translation loss?"
That's a one-project trial, not an existential commitment. Agencies run parallel stacks all the time.
Rocapine and Quiet are design partners precisely because they're high-volume studios.
They ship many apps, they feel this pain on every project, and they have the appetite to try a new tool without betting their entire business on it.
We are following the same strategy Figma used to displace Sketch.
We offer features that deliver a 10× improvement in the overall experience.
We ship updates regularly and work closely with our users.
We demonstrate a technically superior product.
We also use the exact same nomenclature (e.g., fill, hug) and the same overall design-editing experience.
A designer familiar with Figma will be able to use Applikai instantly.
First, a reframe: developers are not the primary decision-maker in Applikai's adoption path.
A founder, PM, or agency lead chooses Applikai.
The developer's job then shifts, and that shift is actually an upgrade.
In a traditional mobile stack, developers spend a disproportionate amount of time on work that has nothing to do with engineering: translating Figma files into code pixel by pixel, debugging layout inconsistencies across devices, wiring up analytics SDKs, managing App Store submissions, resolving merge conflicts from design changes.
These are not interesting problems. They are friction.
The developers who will resist Applikai are the ones whose identity is tied to the craft of writing Swift or Kotlin by hand.
That's a real group, and they are not our early adopters.
The developers among our early adopters know the pain of shipping apps on mobile.
Lock-in fear comes from two distinct places:
The first is capability lock-in: "what if Applikai cannot build what my client needs in 6 months?"
Our answer is transparency: we publish our component roadmap publicly, we commit to specific release milestones with design partners, and enterprise contracts include a clause that allows custom component development at a fixed rate.
The second is data and portability lock-in: "what if we want to leave?"
We are evaluating open-sourcing the engine runtime as a concrete commitment to our customers.
If we disappear, your app keeps running and your data is readable without us.
The best lock-in protection is a product that keeps getting better faster than the cost of switching.
That's what we're building, and it's what our design partner relationships are designed to stress-test right now.
Most competitors output code and cannot deliver real-time device testing, soft updates, built-in telemetry & A/B, or Figma-style multi-agent collaboration.
Our unique app-engine architecture creates a permanent moat; Lovable, Figma, and others cannot integrate these features without a complete overhaul.
Design partners (including Rocapine and Quiet) and proven demand (Lovable’s scale) validate real production need from day one.
We closely mirror existing interfaces throughout the product: designers have a familiar Figma-like interface with the same nomenclature.
Our agents interact with Applikai exactly like a user would. They select wireframes, open menus, change properties, etc. with a mouse cursor.
You can ask an agent to show you how to perform a task.
In addition, all tasks can be triggered with plain English to be processed by agents.
There is no need to know how to add a condition to a navigation edge: just tell the agents "When user is logged in, go to screen X, otherwise Y" while selecting the starting screen.
We are designing for it: better models accelerate us more than they threaten us.
Every capability improvement in code generation translates directly into better agents inside Applikai.
We are the orchestration layer and runtime, not a code generator.
The smarter the models get, the more valuable a structured, auditable, conflict-free environment becomes to run them in.
Code generation getting better does not make Applikai obsolete:
The analogy is Unreal Engine vs. AI-generated 3D assets.
Better AI tools generate better assets faster. Studios still need a game engine to run them.
Nobody argues that Nano Banana makes Unreal Engine obsolete.
Applikai is the engine. Models are the asset pipeline.
Applikai minimizes the use of code as much as possible.
Applikai encodes the functional spec as structured English (e.g., ‘when [event] happens, do [action]’).
Applikai is designed with such models in mind.