
Building Metapad: An AI-First Engineering Story
CEO, transentis labs GmbH
February 2026
Here's something we don't often see admitted in public: we haven't written a single line of Metapad's code by hand.
Every component, every algorithm, every database migration, every WebSocket handler — all of it was generated by AI, guided by humans. And the result is a production application with real-time collaboration, visual metamodeling, knowledge graph capabilities, and an AI assistant that generates enterprise models from natural language.
This isn't a toy project or a demo. It's a professional tool used by enterprise architects and consultants. And we built it in a few weeks of calendar time.
Here's what we learned.
What We Actually Built
Let's be specific about the scope, because "built with AI" can mean anything from a landing page to a todo app:
- A full-stack Rust application — compiled to WebAssembly for the browser, with an Axum server backend
- Real-time collaborative editing — WebSocket-based synchronization where multiple users edit the same model simultaneously
- A visual canvas — SVG-based modeling surface with drag-and-drop, relationship drawing, multi-select, and direct manipulation
- A metamodeling engine — users define their own node types, relationship types, and constraints, then create instances that conform to the metamodel
- An AI assistant — Claude-powered model generation from natural language, respecting metamodel constraints
- Full internationalization — models can be translated into multiple languages with real-time language switching
- Import/export — JSON model serialization and SVG diagram export
- Team features — model sharing, permissions, presence indicators
This is not a simple CRUD app. It's an IDE — one of the most UI-intensive, interaction-heavy categories of software there is.
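To make the metamodeling idea concrete: users define node types and relationship types, and every instance is checked against those definitions. Metapad's actual data model isn't shown in this post, so the following is only a hedged, std-only Rust sketch of what such conformance checking can look like; the names (`Metamodel`, `validate_rel`, and so on) are ours, not Metapad's API.

```rust
use std::collections::{HashMap, HashSet};

// A metamodel: the node types users define, plus which relationship
// types are allowed between which node types.
struct Metamodel {
    node_types: HashSet<String>,
    // relationship type -> (allowed source node type, allowed target node type)
    rel_types: HashMap<String, (String, String)>,
}

// Model instances: typed nodes (keyed by id elsewhere) and typed relationships.
struct Node { node_type: String }
struct Rel { rel_type: String, source: u32, target: u32 }

impl Metamodel {
    // A node conforms if its type is declared in the metamodel.
    fn validate_node(&self, node: &Node) -> Result<(), String> {
        if self.node_types.contains(&node.node_type) {
            Ok(())
        } else {
            Err(format!("unknown node type '{}'", node.node_type))
        }
    }

    // A relationship conforms if its type is declared and both
    // endpoints have the node types the metamodel prescribes.
    fn validate_rel(&self, rel: &Rel, nodes: &HashMap<u32, Node>) -> Result<(), String> {
        let (src_ty, tgt_ty) = self
            .rel_types
            .get(&rel.rel_type)
            .ok_or_else(|| format!("unknown relationship type '{}'", rel.rel_type))?;
        let src = nodes.get(&rel.source).ok_or("missing source node")?;
        let tgt = nodes.get(&rel.target).ok_or("missing target node")?;
        if &src.node_type != src_ty {
            return Err(format!("source must be '{}', got '{}'", src_ty, src.node_type));
        }
        if &tgt.node_type != tgt_ty {
            return Err(format!("target must be '{}', got '{}'", tgt_ty, tgt.node_type));
        }
        Ok(())
    }
}
```

The same check runs whether a relationship is drawn on the canvas or proposed by the AI assistant, which is what "respecting metamodel constraints" means in practice.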
The Approach: Humans Architect, AI Implements
Our process looks nothing like traditional software development, and also nothing like the "just prompt it and ship it" caricature of AI-assisted coding.
We Provide the Vision
Every feature starts with a human decision about what to build and why. We write detailed feature specifications, architectural documents, and design decisions. We think carefully about user experience, data models, and system boundaries.
This is the work that matters most — and it's the work that AI can't do for you. No amount of code generation helps if you're building the wrong thing.
AI Implements the Design
With a clear specification in hand, we describe what we need and the AI generates the implementation. Not in one shot — in a collaborative conversation. We review the approach, suggest adjustments, and iterate until the implementation matches our intent.
The key insight: AI is extraordinarily good at translating a clear design into working code. The clearer the specification, the better the output. This creates a virtuous cycle — the discipline of writing precise specifications improves both the AI output and our own understanding.
We Test and Refine
Every feature goes through manual testing. We run the application, exercise the new functionality, and provide feedback. When something doesn't work or doesn't feel right, we describe the issue and the AI fixes it.
We also build automated tests — unit tests for business logic, integration tests for API endpoints, and end-to-end tests for critical user flows. The AI writes these too, but we define what needs to be tested and verify the coverage.
Iterate Rapidly
The speed of this cycle is what makes it transformative. A feature that would traditionally take days of implementation takes hours. Not because the AI types faster — but because the feedback loop is measured in minutes, not days.
Describe a feature. Review the implementation. Test it. Provide feedback. Get a fix. Test again. Ship it.
This loop runs many times per day. In a few weeks, we shipped what would traditionally be months of work.
What We Learned
Clarity is the Bottleneck
The speed of AI-assisted development is limited not by how fast code can be generated, but by how clearly you can describe what you want. Vague requirements produce vague implementations. Precise specifications produce precise code.
This has a profound implication: the most valuable engineering skill in an AI-first world is the ability to think clearly and communicate precisely. Architecture, systems thinking, and domain expertise matter more than ever.
AI Handles Complexity Well
We were genuinely surprised by how well AI handles complex, interconnected systems. Real-time collaboration with conflict-free operations, metamodel constraint validation, recursive tree rendering with drag-and-drop — these are hard problems that the AI implemented correctly with proper guidance.
The caveat: you need to understand the complexity yourself. We couldn't have guided the implementation of WebSocket synchronization without understanding distributed systems. The AI amplifies expertise — it doesn't replace it.
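The post doesn't detail Metapad's sync protocol, but "conflict-free operations" is a well-defined idea, and one common realization is a last-writer-wins register per property: every update carries a logical timestamp, and replicas keep whichever update orders last, so applying the same updates in any order converges to the same state. The sketch below is a hedged, std-only illustration of that general technique, not Metapad's implementation; all names are our own.

```rust
use std::collections::HashMap;

// A property update as it might travel over the WebSocket: which node,
// which field, the new value, and a logical timestamp.
#[derive(Clone)]
struct Update {
    node_id: u32,
    field: String,
    value: String,
    // (lamport counter, client id): the client id breaks ties,
    // giving every pair of updates a total order.
    stamp: (u64, u32),
}

// Each replica keeps only the winning update per (node, field).
#[derive(Default)]
struct Replica {
    state: HashMap<(u32, String), Update>,
}

impl Replica {
    // Apply an incoming update; older or duplicate updates are ignored,
    // so delivery order and redelivery don't matter.
    fn apply(&mut self, u: Update) {
        let key = (u.node_id, u.field.clone());
        match self.state.get(&key) {
            Some(existing) if existing.stamp >= u.stamp => {} // stale: drop it
            _ => {
                self.state.insert(key, u);
            }
        }
    }

    fn get(&self, node_id: u32, field: &str) -> Option<&str> {
        self.state
            .get(&(node_id, field.to_string()))
            .map(|u| u.value.as_str())
    }
}
```

The design choice worth noting: correctness lives in the merge rule, not in message ordering, which is exactly what makes this class of problem tractable to specify precisely for an AI collaborator.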
The Human Role Shifts, Not Shrinks
We spend our time on:
- Vision and strategy — deciding what to build and why
- Architecture and design — defining system boundaries, data models, interaction patterns
- Quality assurance — testing, reviewing, and providing feedback
- User experience — judging whether something feels right, not just whether it works
What we don't spend time on: typing code, looking up API docs, debugging syntax errors, writing boilerplate. These activities consumed the majority of traditional development time. Eliminating them doesn't make the human role smaller — it makes it more focused on the work that actually matters.
Documentation Becomes a First-Class Practice
In traditional development, documentation is often an afterthought. In AI-first development, it's essential infrastructure. Our architectural documents, feature specifications, and design decisions serve double duty: they guide the AI and they preserve institutional knowledge.
Every significant change is documented. Every design decision is recorded with its rationale. This isn't extra work — it's the mechanism by which we communicate with our AI collaborator.
Refactoring is Continuous, Not Optional
Here's something that surprises people: we refactor aggressively — also using AI, also guided by humans.
AI-generated code is not inherently messy. But like any codebase that evolves rapidly, it accumulates patterns that made sense in the moment but don't serve the architecture long-term. A component that started simple grows responsibilities. A data structure that worked for one use case needs restructuring for three.
We treat refactoring as a regular practice, not a future cleanup task. When we see duplication, we extract shared abstractions. When a module grows too large, we decompose it. When the architecture needs to evolve — say, moving from a single-file application to a proper workspace with separated concerns — we plan the restructuring, describe it precisely, and let AI execute it.
The result is a codebase with sound architecture: clear module boundaries, consistent patterns, well-separated concerns. Not because AI naturally produces clean code, but because we continuously invest in keeping it clean — using the same AI-assisted process we use for everything else.
This matters more than people realize. The speed advantage of AI-first development only compounds if the codebase stays healthy. A messy codebase slows down AI just as much as it slows down humans. Clean architecture is what allows us to keep moving fast week after week.
Why This Matters for Metapad
We're not sharing this story just because it's interesting (though we think it is). It directly connects to what we're building.
Metapad is an AI-powered tool for enterprise modeling. Our AI assistant generates models from natural language. Our vision is that AI should enhance human understanding, not replace human judgment.
We built Metapad the same way we expect our users to use it.
When an enterprise architect uses Metapad's AI to generate an organizational model, the process is the same as ours: the human provides the vision and domain knowledge, the AI handles the implementation, and the human reviews and refines the result.
We believe this is how AI tools should work — as amplifiers of human expertise, not as replacements for human thinking. And we believe this because we experience it every day in our own engineering process.
The Skeptic's Questions
We anticipate some objections:
"But can AI really handle production-quality code?"
Our application is in production, serving real users, with real-time collaboration and zero-downtime deployments. The code compiles, the tests pass, and the users are happy. Quality comes from the process — clear specifications, thorough testing, and rapid iteration — not from who (or what) types the characters.
"What about maintenance and debugging?"
We maintain and debug with the same approach. Describe the bug, let AI investigate and fix it, test the fix. The codebase is well-structured (because we specified clear architectural patterns) and well-documented (because documentation is central to the process).
"Doesn't this only work for simple apps?"
We specifically chose a complex, UI-intensive application to prove the approach. Real-time collaboration, visual canvas rendering, metamodeling with constraint validation, AI-powered generation — if it works here, it works anywhere.
"What happens when AI makes mistakes?"
It makes mistakes constantly. So do human developers. The question isn't whether mistakes happen — it's how quickly you catch and fix them. With rapid iteration cycles and thorough testing, mistakes are caught in minutes, not days.
Looking Forward
We believe AI-first engineering is not a trend — it's a permanent shift in how software is built. The teams that learn to work this way will have a structural speed advantage that compounds over time.
But it requires a shift in mindset. The value isn't in generating code faster — it's in spending more time on vision, architecture, and quality. The best AI-first teams won't be the ones with the fanciest prompts. They'll be the ones with the clearest thinking.
We're still early in this journey. Every week we learn something new about how to work effectively with AI. But the results speak for themselves: a complex, production-ready application built in weeks, not months, by a small team focused on what matters.
The future of software engineering isn't about writing code. It's about understanding what needs to be built — and building it with every tool at your disposal.
Metapad is our AI-powered IDE for Enterprise Digital Twins — built entirely with AI. Try it free and see what AI-first engineering produces.
About transentis
transentis labs GmbH builds tools for understanding and transforming complex systems. Metapad is our professional IDE for Enterprise Digital Twins. Learn more about our mission.