
AI First Principles Manifesto
Designing Organizational Artificial General Intelligence
Introduction
The discourse surrounding artificial intelligence is dominated by a grand and misleading narrative of replacement. We are told AI is coming for jobs, that human obsolescence is a matter of when, not if. This is a failure of imagination. The true, immediate potential of AI is not replacing people, but replacing the soul-crushing bureaucracy that makes their work miserable. It offers a path to dismantle the layers of process, approvals, and communication overhead that drain human potential.
Yet, this opportunity is being squandered. A staggering percentage of AI projects stall before deployment, with studies showing that over 80% fail to move from the lab to production.[1] The reasons are rarely technical. The failure is human. Organizations are implementing AI backward, attempting to automate inefficiency by bolting a revolutionary capability onto broken operational models. They are strapping a jet engine to a horse-drawn carriage. The result is not a faster carriage; it is a catastrophic mess, a predictable wave of costly implementations that fail to deliver value.
This paper introduces the AI First Principles, a framework for reversing this approach. It is not another methodology, but a collection of foundational Values and operational Core Tenets for rethinking work in an organization where AI is a native collaborator. These principles are grounded in decades of research from disparate fields: the systems thinking of Peter Senge,[2] the human-centered design philosophy of Don Norman,[3] the practical operationalization of AI detailed in Age of Invisible Machines,[4] the operational logic of Eli Goldratt's Theory of Constraints,[5] and the innovation theory of Clayton Christensen.[6]
The framework is divided into two parts. The Values establish the philosophical foundation, answering, "What do we believe?" The Core Tenets provide the operational logic, answering, "How do we act on those beliefs?" Together, they offer a coherent, human-centered approach for any organization ready to stop automating the past and start building the future of work.
Beyond These Principles
The AI First Principles are intentionally focused on the timeless truths of human-AI collaboration. They are not a comprehensive guide to every facet of implementation. These principles operate on the assumption that specialized domains like regulatory frameworks, legal liability, and specific technology choices are already being handled by their respective experts. For example, the NIST AI Risk Management Framework provides a robust, voluntary structure for identifying, measuring, and monitoring risks, serving as an excellent operational counterpart to the values outlined here.[7] These principles are designed to be layered upon this foundation of responsible governance, not to replace it. They are for the builders, designers, and strategists tasked with creating value within those established boundaries, ensuring that our most powerful tools are fused with human ingenuity to eliminate dysfunction and unlock potential.
Values
People Own Objectives
Every objective needs a human owner to ensure people remain accountable for outcomes. When AI causes harm, the human owner is accountable, not the algorithm. Name the Owner.
The Hidden Problem The arrival of sophisticated AI has introduced a dangerous new entity into our accountability frameworks: the "black box." When an AI system produces a harmful, biased, or simply incorrect outcome, the first human instinct is often to point a finger at the algorithm itself. This creates a tempting and perilous illusion of diffused responsibility, where a non-sentient tool is treated as a culpable actor. This phenomenon, often termed "automation bias," describes our tendency to over-trust automated systems and abdicate our own judgment, a cognitive shortcut that becomes exponentially more dangerous as those systems grow in complexity.[8] The hidden problem is not that AI will make mistakes; it is that organizations will create structures that allow those mistakes to become nobody's fault. This accountability vacuum allows for systemic risks to grow unchecked, as no single individual feels the ownership necessary to question, audit, or challenge the machine's output. The system becomes an orphan, and its failures become organizational acts of God.
The Cost of Ignoring It When no one person owns the objective, the AI system becomes a scapegoat for organizational failure. A biased lending algorithm that denies loans to qualified applicants is not a "coding error"; it is a failure of the human who owns the objective of fair lending. A supply chain model that collapses under unexpected disruption is not a "model drift" problem; it is a failure of the human who owns the objective of resilient logistics. Ignoring this principle leads to a corrosive culture of finger-pointing and risk aversion. Instead of building robust systems, teams build elaborate justifications. Meetings are held not to solve problems, but to document that one is not to blame for them. This behavior is a direct echo of what Fred Brooks identified in The Mythical Man-Month: the diffusion of responsibility across a team leads to a state where no single person can be held accountable, and the project itself suffers.[9] In the age of AI, this cost is magnified. A single algorithmic error can propagate across millions of decisions in an instant, making the absence of clear ownership an existential threat.
The Contrarian Insight The conventional approach to AI safety focuses heavily on technical solutions: explainable AI (XAI), model auditing, and bias detection tools. While essential, these are insufficient. The contrarian insight of this value is that the root of AI safety is not technical, but organizational. Accountability is a human construct, and it cannot be delegated to silicon. The most powerful tool for ensuring a safe and effective AI system is a single human name assigned to its objective. This is not about blaming an individual; it is about empowering one. The designated owner is not the person who writes the code or configures the model. They are the person who is accountable for the business outcome the AI is intended to achieve. This shifts the focus from "Is the model accurate?" to "Is the objective being met safely and effectively?" This single point of ownership creates the moral and professional gravity necessary to force hard questions and demand true understanding, transforming accountability from a distributed, abstract concept into a concrete, human responsibility.
Solution Framework Implementing this value requires a simple but radical act: for every AI system deployed, an executive "owner" must be publicly named. This individual is accountable for the system's outcomes, good or bad. This framework forces a cascade of positive behaviors. The owner, now personally on the hook, is intensely motivated to understand the system's limitations. They are compelled to ask "how could this go wrong?" and to demand safeguards that go beyond technical minimums. They become the chief skeptic, challenging the enthusiastic claims of the development team. This approach mirrors the principles of Total Quality Management (TQM), where responsibility for quality is not relegated to a downstream inspection department but is owned by those who manage the process itself.[10] Naming the owner makes accountability a proactive, design-time consideration, not a reactive, post-mortem exercise. It forces the organization to move beyond plausible deniability and embrace genuine responsibility.
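As an illustration only, here is a minimal sketch of what "naming the owner" can look like as an operational artifact rather than a slide: a machine-readable registry entry that ties every deployed AI system to a single accountable human. The record and field names below are hypothetical assumptions, not part of any standard or of this framework's formal requirements.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AISystemRecord:
    """Hypothetical registry entry tying one deployed AI system to one accountable human."""
    system_name: str
    objective: str       # the business outcome the owner is accountable for
    owner_name: str      # a single named person, never a committee or department
    owner_role: str
    named_on: date
    halt_authority: bool = True  # the owner can question, halt, or modify the system

registry: list[AISystemRecord] = []

def register(record: AISystemRecord) -> None:
    # Reject anything that looks like diffused ownership.
    if any(word in record.owner_name.lower() for word in ("team", "committee", "department")):
        raise ValueError("Ownership must rest with a single named individual.")
    registry.append(record)

register(AISystemRecord(
    system_name="loan-underwriting-model",
    objective="Fair and accurate consumer lending decisions",
    owner_name="Jane Doe",
    owner_role="VP, Consumer Credit",
    named_on=date(2025, 1, 15),
))
```

The design choice worth noting is the rejection rule: the registry refuses any entry that names a group, which makes the "no committees" principle enforceable rather than aspirational.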
Philosophical Foundation This principle is deeply rooted in the philosophy of human-computer interaction and organizational psychology. Decades of research have shown that technology is not a neutral actor; it is a medium that reflects and amplifies the values of its creators.[11] This connects to the foundational concept of the "sociotechnical system," first identified by researchers at the Tavistock Institute, which posits that organizational outcomes are the product of inseparable interactions between social structures and technical tools.[12] A system without a human owner is a technical object detached from the social contract of responsibility. Furthermore, this value aligns with the core tenets of Servant Leadership, where the leader's primary role is to ensure the team is functioning effectively and ethically toward a clear goal.[13] The AI owner is a servant leader for the human-AI system, responsible for its well-being and its impact on others. Finally, the act of naming an owner is a direct application of the "Agile Manifesto's" emphasis on "individuals and interactions over processes and tools."[14] It correctly identifies that the ultimate arbiter of a system's success is not the tool itself, but the human judgment guiding it.
Implementation Reality In practice, naming a single owner will face immediate and fierce resistance. It cuts against the grain of corporate cultures that favor committee-based decisions and shared responsibility precisely because those structures diffuse risk and protect individuals from blame. The first pushback will be an attempt to name a committee or a department as the owner; this must be rejected. A committee has no single throat to choke. The second point of resistance will come from potential owners who fear becoming a scapegoat for complex system failures. This fear is valid and must be addressed by framing the role as one of empowerment, not blame. The owner must be given the ultimate authority to question, halt, or modify the AI's operation to meet their objective. They are not just responsible; they are in charge. Overcoming this resistance requires unambiguous executive sponsorship and a clear, organization-wide understanding that the goal is not to punish failure, but to create the conditions for success.
Individuals Come First
Prioritize human autonomy, safety, and well-being above efficiency, profit, or convenience. AI amplifies values, biases, and the capacity for manipulation. Build systems that preserve human agency above all else.
The Hidden Problem The dominant logic of technology development is optimization. We are conditioned to seek efficiency, to minimize friction, and to maximize engagement or output. When applied to AI, this logic carries a hidden and dangerous payload. An AI designed to maximize profit for an insurance company may learn that denying claims to the most vulnerable customers is the most effective strategy. An AI designed to maximize employee productivity may conclude that constant surveillance and algorithmically generated pressure are the best tools. The hidden problem is that in the pursuit of a quantifiable metric, AI systems can become powerful engines for dehumanization. They lack the moral compass to understand that not all efficiencies are good and not all friction is bad. The friction of a thoughtful pause or a compassionate exception is often where humanity resides. By prioritizing abstract metrics over human well-being, we risk building systems that are exquisitely effective at achieving the wrong goals.
The Cost of Ignoring It When human agency is subordinated to system goals, the result is a landscape of digital puppetry. We see this in dark patterns, where user interfaces are intentionally designed to trick people into making choices they would not otherwise make, a direct assault on autonomy.[15] We see it in gig economy platforms where algorithmic managers dictate every action, leaving workers with a profound sense of powerlessness and alienation. The cost is not just a loss of trust; it is a measurable decline in mental and emotional well-being. This aligns with research on psychological safety, which demonstrates that environments where individuals feel controlled or threatened suffer from a catastrophic drop in innovation, engagement, and honest feedback.[16] An organization that ignores this value will inevitably create AI systems that, while perhaps profitable in the short term, generate a long-term trust deficit with its customers and a culture of fear among its employees. They will build systems that are technically efficient but morally and operationally bankrupt.
The Contrarian Insight The conventional wisdom is that a trade-off exists between human-centric design and business objectives like profit or efficiency. The contrarian insight is that this is a false dichotomy. In the long run, systems that undermine human autonomy are brittle and unsustainable. They generate resentment, workarounds, and eventual abandonment. The most robust and profitable systems are those that empower users, not manipulate them. This principle asserts that human well-being is not a constraint on design, but the ultimate objective of it. A system that makes a user feel respected, capable, and in control is a system that will earn their loyalty and engagement. This shifts the design question from "How can we get the user to do what we want?" to "How can we help the user achieve their goal with dignity and clarity?" This is the core philosophy of human-centered design, which argues that the most successful products emerge from a deep and empathetic understanding of the user's needs and context, not just the organization's.[3]
Solution Framework To put individuals first, organizations must embed the concept of "dignity" as a primary design constraint. This means that during the design and review process, teams must be able to answer a critical question: "Does this system treat the human interacting with it as an end, or as a means to an end?" This framework requires the creation of "dignity metrics" that sit alongside performance metrics. For example, does the system offer clear and easy ways to opt-out or appeal a decision? Does it provide transparency about how it works, or does it operate as an unexplainable black box? Does it create pressure or provide support? This approach involves explicitly mapping the emotional and psychological journey of the user, not just their transactional path. It requires adopting techniques like "value stream mapping" not just for process efficiency, but for human experience, identifying and eliminating steps that cause frustration, confusion, or a sense of powerlessness.[17]
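A minimal sketch, purely illustrative, of what pairing "dignity metrics" with performance metrics might look like in practice. The specific metric names, thresholds, and the release-gate function are assumptions for the example, not prescriptions.

```python
# Illustrative only: dignity metrics reviewed alongside performance metrics,
# so a system cannot be declared "successful" on conversion numbers alone.
performance_metrics = {
    "conversion_rate": 0.042,
    "avg_handle_time_sec": 310,
}

dignity_metrics = {
    "opt_out_available": True,           # can the person decline or exit easily?
    "appeal_rate_resolved": 0.87,        # share of appealed decisions given a human review
    "decisions_with_explanation": 0.95,  # share of outcomes with a plain-language reason
    "dark_pattern_audit_passed": True,
}

def release_gate(perf: dict, dignity: dict) -> bool:
    """A release ships only if the dignity metrics hold, regardless of performance gains."""
    return (
        dignity["opt_out_available"]
        and dignity["dark_pattern_audit_passed"]
        and dignity["decisions_with_explanation"] >= 0.9
    )

print(release_gate(performance_metrics, dignity_metrics))  # True in this example
```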
Philosophical Foundation This value stands on the shoulders of centuries of philosophical thought regarding human autonomy, most famously articulated by Immanuel Kant's categorical imperative to always treat humanity as an end, and never merely as a means.[18] In the age of AI, this is no longer an abstract philosophical exercise; it is a concrete design principle for preventing instrumentalization, where a person is reduced to a mere tool for the system's goal. Technology that uses deception or manipulation to achieve its ends explicitly violates this imperative. This principle also draws heavily from the field of positive computing, which focuses on designing technology to support human well-being and flourishing, arguing that our tools should not just be functional but should contribute positively to our psychological state.[19] It asserts that good technology should enhance human capability and experience, not diminish it for the sake of an optimized metric. By placing the individual first, we commit to building AI that serves human values, rather than asking humans to serve the values of the machine.
Implementation Reality The primary resistance to this principle will be the perceived cost and the pressure of metrics. Building in transparency, offering recourse, and prioritizing user control can seem more expensive and less "efficient" than creating a closed, optimized system designed to maximize a single KPI like conversion or engagement. The argument will be made that "users don't care" or "it will hurt our numbers." This resistance is best overcome with evidence, not argument. Start with small, contained projects that demonstrate the long-term value of a trust-based relationship. Show how empowered users become more loyal customers and how respected employees are more innovative. This requires a willingness to measure success in years, not quarters, and to believe that the most sustainable competitive advantage is the trust of the people you serve. It is a commitment to building systems that we would, ourselves, be willing to be subject to.
Build From User Experience
Design insight comes from living with the daily friction that analysis misses. People who navigate these daily realities understand what breaks and why. The people wrestling with system failures are the ones most qualified to design system futures.
The Hidden Problem There is a fundamental disconnect in how most systems are designed. The people who conceive of the system (executives and architects) are institutionally insulated from the consequences of its daily use. They see the system through flowcharts, spreadsheets, and high-level dashboards—clean, abstract representations that bear little resemblance to the messy reality of its operation. The people who actually use the system—the customer service agent, the factory floor worker, the end customer—experience it as a series of frustrating workarounds, confusing interfaces, and illogical dead ends. The hidden problem is that organizations systematically devalue the most crucial source of design insight: the lived experience of failure. This experiential knowledge, rich with context and nuance, is often dismissed as "anecdotal" in favor of aggregated quantitative data that masks the very friction points that signal deep design flaws.
The Cost of Ignoring It When you design from the top down, you build elegant solutions to the wrong problems. This is the path to failed products, wasted investment, and user resentment. An organization might spend millions on a new enterprise resource planning (ERP) system designed to "streamline" operations, only to find that employees have developed a complex shadow system of spreadsheets because the official tool is unusable for their actual tasks. This is a classic example of what Peter Senge calls "compensating feedback," where a well-intentioned intervention is defeated by the system's own response to it.[2] The cost is not just the wasted money; it is the erosion of trust between leadership and the front lines. Employees become cynical, believing that their real-world expertise is ignored. Customers become frustrated, feeling that the company does not understand or care about their experience. This creates a cycle where bad systems are endlessly patched but never fundamentally fixed.
The Contrarian Insight The contrarian insight, borrowed from the world of human-centered design, is that the person experiencing the problem is more of an expert than the person analyzing it from a distance. While data can tell you what is happening, only lived experience can tell you why. This principle argues that the most valuable design activity is not a brainstorming session in a conference room, but an ethnographic immersion into the user's world. It is about understanding the "job to be done" from the user's perspective, a concept powerfully articulated by Clayton Christensen.[6] He argued that people don't buy products; they "hire" them to do a job. To design a better product, you must deeply understand the job, not the customer demographics. The people on the front lines, wrestling with the system's failures, are the ones who understand the "job" most intimately. They are the true subject matter experts.
Solution Framework To build from user experience, organizations must invert the traditional design hierarchy. Instead of architects handing down blueprints, designers must become embedded observers and co-creators with users. The framework for this is grounded in the principles of Design Thinking, which begins with empathy.[20] This means developers, product managers, and even executives must spend meaningful time "on the floor," directly observing and even performing the work their systems are meant to support. This practice, known in Lean manufacturing as genchi genbutsu ("go and see"), is foundational to understanding the reality of a problem.[21] The insights from this direct observation then become the raw material for ideation and prototyping. The goal is not to ask users what they want—they often cannot articulate it—but to understand their context so deeply that you can design a solution that feels like it was built just for them.
Philosophical Foundation This value is rooted in the philosophy of phenomenology, which posits that subjective experience is a primary source of knowledge that cannot be fully captured by objective measurement. It is also a direct application of systems thinking, which teaches that a system's behavior is an emergent property of the interactions between its parts—a property that cannot be understood by analyzing the parts in isolation.[22] A user's frustration is an emergent property of a poorly designed system, and it can only be diagnosed by observing the user interacting with the whole. This principle also embodies the core ideas of the "Agile Manifesto," particularly the value of "customer collaboration over contract negotiation."[14] It extends this idea from the paying customer to all users of a system, internal and external, treating them as essential partners in the creative process. It is a commitment to intellectual humility—an acknowledgment that true expertise is often found farthest from the executive suite and closest to the work itself.
Implementation Reality The greatest barrier to implementing this value is organizational ego and the illusion of efficiency. It requires leaders to admit they do not have all the answers and that the most valuable insights may come from the lowest-paid employees. It also requires an investment of time that feels "unproductive" to managers focused on immediate output. Sending a team to do ethnographic research feels slower than jumping straight into development. This resistance must be countered by demonstrating the high cost of not doing it: the failed projects, the wasted engineering cycles, and the abandoned products. The most effective tactic is to deliver a rapid prototype that solves a real, observed point of friction. When an executive sees a simple solution that perfectly addresses a problem they didn't even know existed, the value of the approach becomes undeniable, proving that the shortest path to a successful product is not a straight line from idea to launch, but an iterative loop that begins and ends with the human experience.
Core Tenets
Design a Hierarchy of Agency
Think org chart for AI decisions: clearly map when AI acts independently, when it recommends, and when it must escalate to humans. Design the discernment model, then let AI operate within it.
The Operational Dysfunction Organizations get trapped in a false binary when designing human-AI collaboration. They either create bottlenecks by forcing humans to approve every minor AI-driven action, or they abdicate responsibility by granting AIs full autonomy over complex judgments they are not equipped to handle. In the first case, a system designed for speed grinds to a halt waiting for a human to rubber-stamp a routine task. In the second, a system designed for efficiency makes a catastrophic error because it lacks the context or ethical nuance to navigate an unexpected situation. The dysfunction manifests in treating decision-making authority as a simple on/off switch—either human or machine—when it should be dynamically managed. This leads to systems that are simultaneously brittle and inefficient, failing to leverage the unique strengths of either partner.
Why Standard Approaches Fail The standard approach is to define static roles: humans handle strategy and exceptions, while AI handles execution and repetitive tasks. This fails because it is not context-aware. The importance and complexity of a decision are not fixed; they change based on the situation. A simple task, like a financial transaction, can become a high-stakes decision during a market crisis or a cybersecurity event. The static role-based approach cannot adapt. It lacks the sophistication to understand that the "right" level of autonomy depends entirely on the context and consequences. This is why attempts to simply automate tasks without designing the surrounding decision framework result in failure. The system lacks a "discernment architecture"—a pre-defined logic for knowing when to act, when to ask for help, and when to stop entirely.
The Tenet's Core Logic The core logic is to shift the human's primary role from being a decision-maker to being the architect of the decision-making system. The most leveraged human activity is not to make every individual choice, but to design the hierarchy of agency that intelligently delegates authority. This architecture is built on three dimensions:
- Consequence Scaling: The level of autonomy granted to the AI is inversely proportional to the potential negative impact of a mistake. Low-stakes decisions can be fully automated; high-stakes decisions require human review.
- Context Sensitivity: The rules are not static. The hierarchy adapts based on changing conditions. A process that is 99% autonomous during normal operations may become 99% manual during a recognized crisis.
- Capability Evolution: The boundaries are dynamic and designed to evolve. As an AI demonstrates competence in a specific area, its sphere of autonomy can be deliberately and safely expanded by human owners. This transforms the human-AI relationship from a simple handoff to a sophisticated, adaptable partnership where the human designs and calibrates the system of discernment itself.
Research Validation This principle operationalizes concepts from Naturalistic Decision Making (NDM), which studies how experts make decisions in real-world, high-stakes environments. Experts do not analyze every option; they use their experience to rapidly assess a situation and identify a workable course of action.[23] Designing a hierarchy of agency is about embedding that expert-level discernment into the operational code of the system. It also draws heavily from the practices of high-reliability organizations (HROs). HROs design robust escalation protocols that ensure decisions are pushed to the appropriate level of authority based on the risk and uncertainty of the situation, a practice described as a "preoccupation with failure."[24] Furthermore, research in human-automation interaction has produced foundational models for "levels of automation," which provide a taxonomy for assigning functions to humans and machines.[25] Designing a hierarchy of agency is the architectural embodiment of these models, creating a system that calibrates trust and authority dynamically.
Practical Application In practice, this means defining explicit escalation triggers and confidence thresholds, not just tasks. Instead of specifying "the AI will process invoices," the requirement becomes "the AI will autonomously process invoices under $500 from approved vendors with a 99% confidence score; invoices between $500 and $5,000 or with lower confidence scores will be routed to an AP clerk for review; and any invoice over $5,000 requires manager approval." The system's rules are themselves a product to be designed and iterated upon. The common pitfall is designing a rigid, bureaucratic set of rules that merely codifies the old paper-based approval process. The goal is not to create a digital bureaucracy, but a dynamic and intelligent system of delegation. The hierarchy should feel like an extension of an expert's judgment, offloading their cognitive burden for routine matters so they can focus their attention where it matters most, not like a new set of shackles.
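To make the point concrete, here is a minimal sketch of the invoice example above expressed as a discernment rule rather than prose. The thresholds mirror the numbers in the paragraph; the function and enum names are illustrative assumptions, and a production system would load these rules from configuration so the human owner can recalibrate them without code changes.

```python
from enum import Enum

class Route(Enum):
    AUTO_PROCESS = "process autonomously"
    CLERK_REVIEW = "route to AP clerk"
    MANAGER_APPROVAL = "route to manager"

def route_invoice(amount: float, confidence: float, approved_vendor: bool,
                  crisis_mode: bool = False) -> Route:
    """Consequence scaling plus context sensitivity: autonomy shrinks as stakes or uncertainty grow."""
    if crisis_mode:
        # Context sensitivity: during a recognized crisis, everything escalates to humans.
        return Route.MANAGER_APPROVAL
    if amount > 5000:
        return Route.MANAGER_APPROVAL
    if amount < 500 and approved_vendor and confidence >= 0.99:
        return Route.AUTO_PROCESS
    return Route.CLERK_REVIEW

# Capability evolution happens by deliberately widening these thresholds
# (e.g., raising the $500 ceiling) once the AI has demonstrated competence.
print(route_invoice(amount=320.00, confidence=0.995, approved_vendor=True))   # Route.AUTO_PROCESS
print(route_invoice(amount=3200.00, confidence=0.97, approved_vendor=True))   # Route.CLERK_REVIEW
```

The point of the sketch is that the rules themselves, not the individual decisions, are the designed product the owner iterates on.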
Deception Destroys Trust
AI that pretends to be human eliminates informed consent and creates false relationships. People cannot collaborate effectively with what they don't recognize as artificial. Make AI obvious, not hidden.
The Operational Dysfunction There is a growing trend to design AI systems, particularly chatbots and voice assistants, to be as "human-like" as possible. They are given names, scripted with conversational tics, and designed to project personality and even emotion. The intention may be to create a more "natural" or "friendly" user experience. However, this creates a profound operational chaos: it introduces deception into the very foundation of the user interaction. When a person does not know they are talking to a machine, they cannot give informed consent to the interaction. They may disclose more information than they otherwise would, or form a one-sided emotional connection—a parasocial relationship—with a system that cannot reciprocate. This deception is not a harmless gimmick; it is a violation of the user's autonomy and a corrosive agent that makes genuine trust impossible.
Why Standard Approaches Fail The standard approach is driven by a flawed metric: "naturalness." Design teams strive to pass the Turing Test, not because it creates a better outcome for the user, but because it is a technical challenge. This focus on biomimicry is fundamentally misguided. The goal of a collaborative system should be clarity and effectiveness, not impersonation. A hammer is a good tool not because it looks like a human fist, but because its form perfectly communicates its function. Standard approaches fail because they conflate "easy to use" with "human-like." As Don Norman argues in The Design of Everyday Things, the best designs are those that provide clear "signifiers" that communicate how an object can be used.[3] An AI that pretends to be human provides a false signifier. It suggests a capacity for empathy, understanding, and reciprocity that simply does not exist, setting the user up for disappointment and a feeling of being duped.
The Tenet's Core Logic The core logic of this tenet is that collaboration requires trust, and trust requires honesty. You cannot have a healthy, functional collaboration with a partner who is lying to you about their fundamental identity. Making AI obvious is not about creating cold, robotic interfaces. It is about establishing clear boundaries and managing expectations. When an AI clearly identifies itself as such—"I'm the company's automated scheduling assistant"—it frames the interaction correctly. The user understands they are interacting with a tool, not a person. They will adjust their language, their expectations, and their level of disclosure accordingly. This transparency creates a foundation of trust. The user trusts that the system will be good at its stated purpose (scheduling a meeting) and will not pretend to be good at something it is not (offering emotional support). This honesty, as counterintuitive as it may seem to some marketers, is the only sustainable path to long-term user adoption and engagement. Deception is a short-term trick; transparency is a long-term strategy.
Research Validation This principle is strongly supported by research in HCI and technology ethics. Studies on anthropomorphism show that while users may initially find human-like agents appealing, this effect quickly sours when the agent fails to meet the heightened social expectations it created, leading to a greater sense of betrayal and frustration.[26] This is related to the "Uncanny Valley" hypothesis, which suggests that near-perfect human replicas can elicit feelings of unease or revulsion.[27] Foundational ethical frameworks prioritize informed consent as a cornerstone of any interaction, be it medical, financial, or digital.[28] An AI that hides its nature denies the user the ability to consent to the terms of the interaction. Further research in social psychology confirms that perceived dishonesty is one of the fastest and most potent destroyers of trust in any relationship.[29] Making AI obvious respects the user's cognitive autonomy, establishing a partnership based on clarity rather than an illusion.
Practical Application Practically, this means establishing clear organizational design standards that forbid deceptive practices. Chatbots should have names that are clearly non-human (e.g., "SchedulingBot 3000") or must explicitly state their nature at the beginning of an interaction ("Hi, I'm a virtual assistant..."). AI-generated text, images, or voice should be watermarked or otherwise labeled as synthetic. The goal is to create an information-rich environment where the user is never in doubt about the nature of the entity they are dealing with. The most common pitfall is the belief that this transparency will make the product seem less advanced or "magical." This is a failure of confidence in the product's actual utility. If an AI tool is genuinely useful, it does not need to pretend to be human. Its value will be self-evident. The focus should shift from "human-like" to "hyper-competent." Make the AI so good at its job that users are delighted to use it, knowing full well it is a machine.
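As a sketch only: one way to make the "never in doubt" standard mechanical rather than aspirational is to enforce disclosure at the interface layer, so no bot reply can reach a user without identifying itself. The wrapper and function names below are hypothetical, not a reference implementation.

```python
DISCLOSURE = "I'm SchedulingBot, an automated assistant (not a human)."

def send_bot_reply(reply_text: str, first_turn: bool) -> str:
    """Hypothetical guardrail: every conversation opens with an explicit AI disclosure."""
    if first_turn:
        return f"{DISCLOSURE} {reply_text}"
    return reply_text

def label_synthetic(asset_description: str) -> str:
    # Minimal stand-in for watermarking or labeling AI-generated media.
    return f"{asset_description} [AI-generated]"

print(send_bot_reply("I can offer Tuesday at 10:00 or Thursday at 14:30.", first_turn=True))
print(label_synthetic("Product hero image, v3"))
```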
Prevent What Can't Be Fixed
Some risks destroy projects entirely. Security vulnerabilities, compliance violations, and data breaches require prevention, not iteration. Build regulatory and technical safeguards into architecture decisions from day one.
The Operational Dysfunction The modern technology ethos is dominated by the mantra of "move fast and break things." This iterative, fail-fast approach, popularized by the Agile movement, is incredibly powerful for navigating uncertainty in product features and user interfaces. However, when misapplied to foundational, high-stakes domains, it becomes a recipe for disaster. You cannot "iterate" your way to security after a massive data breach has already occurred. You cannot "A/B test" your way out of a multi-million dollar regulatory fine for a compliance violation. The operational mistake is treating all risks as if they are the same. Organizations are applying a methodology designed for low-cost, reversible failures to domains where failure is catastrophic and irreversible.
Why Standard Approaches Fail Standard software development lifecycles, even agile ones, often treat security, privacy, and compliance as "non-functional requirements" or, worse, as a final checklist item to be addressed by a separate team just before launch. This approach fails because these are not features; they are fundamental, architectural properties of a system. A security flaw is not a bug to be patched; it is a deep-seated vulnerability in the system's design. Retrofitting security onto an insecure architecture is exponentially more expensive and less effective than building it in from the beginning. This mirrors the wisdom of Total Quality Management (TQM) and the Shingo Prize methodology, which emphasize that you cannot "inspect" quality into a product at the end of the assembly line.[30] Quality, like security, must be built into every step of the process. Standard approaches treat these critical safeguards as a tax on development speed, rather than as an enabling foundation for sustainable innovation.
The Tenet's Core Logic The core logic of this tenet is to divide the world of risk into two distinct categories: things that are forgiving and things that are not. Forgiving risks—like a confusing user interface or a poorly performing recommendation engine—are perfect candidates for iteration. The cost of failure is low, the feedback loop is fast, and the fix is relatively cheap. These are the "known unknowns" that can be discovered through experimentation. Unforgiving risks—like violating data privacy laws or allowing a critical vulnerability—are what Nassim Taleb would call "Black Swans" or exposures to negative "fragility."[31] They are low-probability, high-impact events that can destroy the entire system. For these risks, the correct approach is not iteration, but prevention. This requires a shift in mindset from "how do we fix this if it breaks?" to "how do we design the system so this can never break?" It means treating security and compliance not as a checklist, but as a core design philosophy.
Research Validation This principle is a direct application of risk management theory, which distinguishes between risk mitigation (reducing the impact of a failure) and risk avoidance (preventing the failure from occurring). For certain classes of catastrophic risk, avoidance is the only rational strategy. This is the foundation of high-reliability organizations (HROs) like nuclear power plants and air traffic control systems. HROs are defined by their "preoccupation with failure," constantly anticipating and planning for worst-case scenarios because they know the consequences are unacceptable.[24] This tenet calls for bringing that same level of architectural rigor to AI systems. It also embodies the "Shift Left" principle in DevOps, which advocates for moving testing, security, and quality assurance activities as early as possible in the development lifecycle.[32] By embedding safeguards into the initial architecture, the cost of change is minimized and the effectiveness is maximized, a concept well-documented since Barry Boehm's foundational work on software engineering economics.[33]
Practical Application To apply this tenet, security, legal, and compliance experts must be treated as co-creators of the system from the very first day, not as gatekeepers at the very end. Their requirements must be treated as immutable, foundational constraints that shape the architecture, not as features to be added later. This means security and privacy reviews happen during the initial whiteboarding phase, not the week before launch. It means building systems on a foundation of "secure by default" and "privacy by design." A common pitfall is for development teams to view these experts as impediments to speed. This culture must be actively dismantled by leadership. The role of leadership is to frame these safeguards not as a "tax," but as the very foundation upon which the team earns the right to innovate. You are only allowed to "move fast and break things" within a sandbox that has been architected to be fundamentally safe and resilient.
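A small illustration, under assumed names, of "secure by default" as an architectural constraint rather than a launch checklist: the configuration type encodes the safeguards as defaults, and the check is meant to run at build time, not after an incident. Everything here is a sketch, not a compliance standard.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DataStoreConfig:
    """Hypothetical config whose defaults encode the safeguards; teams must argue to weaken them."""
    encryption_at_rest: bool = True
    pii_fields_masked: bool = True
    retention_days: int = 90          # privacy by design: bounded retention by default
    public_network_access: bool = False

def validate_or_fail(cfg: DataStoreConfig) -> None:
    """Intended for a CI/build step ("shift left"): a non-compliant design never ships."""
    violations = []
    if not cfg.encryption_at_rest:
        violations.append("encryption at rest disabled")
    if not cfg.pii_fields_masked:
        violations.append("PII fields unmasked")
    if cfg.public_network_access:
        violations.append("datastore exposed to public network")
    if violations:
        raise SystemExit("Blocking release: " + "; ".join(violations))

validate_or_fail(DataStoreConfig())  # passes silently with the secure defaults
```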
Uncertainty Cultivates Wisdom
People instinctively demand definitive answers, but ranges and probabilities contain useful information. Forcing complex realities into simple yes/no responses destroys important nuance. Build systems that show the 'maybe' instead of hiding behind false certainty.
The Operational Dysfunction There is a deep-seated human bias for certainty. We want clear, definitive answers, especially from our technology. When asked "Will this customer churn?" or "Is this transaction fraudulent?", we want a simple "yes" or "no." This creates immense pressure on developers to design AI systems that provide the illusion of certainty, even when the underlying reality is probabilistic. An operational breakdown occurs when a nuanced, probabilistic output (e.g., "85% confidence this is fraud") is converted into a simple binary answer, a process that destroys crucial information. A system that simply flags a transaction as "fraud" gives the human reviewer no context. A system that says "85% confidence" gives the reviewer a powerful piece of information that can guide the depth and urgency of their investigation. By hiding the "maybe," we build systems that are simultaneously dumber and more dangerously arrogant.
Why Standard Approaches Fail Standard approaches to system design optimize for simplicity and speed of decision-making. A manager wants a clean dashboard with red, yellow, and green lights, not a complex scatter plot of probabilities. This desire for what Daniel Kahneman calls "cognitive ease" leads us to build systems that cater to our biases rather than challenge them.[34] We design systems that give us the simple answers we crave, even if they are misleading. This fails because the real world is messy, complex, and probabilistic. A medical diagnosis AI that gives a binary answer is useless; a doctor needs to understand the confidence level, the potential differential diagnoses, and the factors driving the model's conclusion. By oversimplifying the output for the sake of a clean user interface, standard approaches strip away the very nuance that makes an AI's prediction useful to an expert. They treat the human user as a passive recipient of an answer, rather than as an active partner in a process of sensemaking.
The Tenet's Core Logic The core logic of this tenet is that uncertainty is not noise; it is data. A probability score, a confidence interval, or a range of possible outcomes contains valuable information about the reliability of a prediction. Presenting this uncertainty to the user is an act of intellectual honesty and a prerequisite for effective human-AI collaboration. It allows the human expert to apply their own contextual knowledge to the AI's probabilistic output. If the AI is 99% confident, the human can proceed quickly. If the AI is 60% confident, the human knows to slow down, gather more information, and apply a higher degree of skepticism. This approach transforms the AI from an oracle that provides answers into a tool that provides evidence. It empowers the user, respecting their expertise and giving them the information they need to make a better final judgment. It is about designing for wisdom, not just for answers.
Research Validation This principle is deeply rooted in the fields of statistics and decision theory. The entire framework of Bayesian inference, for example, is built around the idea of updating our beliefs in light of new, uncertain evidence, rather than seeking absolute truth.[35] Forcing a probabilistic world into a deterministic model is a statistical sin that destroys information. This tenet is also supported by extensive research in sensemaking, a concept pioneered by organizational theorist Karl Weick. Weick argued that people and organizations are constantly trying to make sense of ambiguous environments, and that the process of "sensemaking" is about creating plausible accounts of what is happening, not about finding a single, objective truth.[36] An AI that presents uncertainty is providing the raw materials for sensemaking. It gives the user clues, probabilities, and ranges that they can weave into a more resilient and nuanced understanding of the situation, rather than a single, brittle "fact" that may be dangerously wrong.
Practical Application In practice, this means rejecting binary outputs for any non-trivial prediction. Instead of a "yes/no" flag, the system should display a confidence score. Instead of a single predicted sales number, it should show a probable range. The user interface must be designed to visualize uncertainty effectively, using techniques like error bars, probability distributions, or even verbal framing like "likely" vs. "very likely." A common pitfall is information overload. Simply dumping a wall of statistical data on a user is not helpful. The key, as visualization expert Edward Tufte has long advocated, is to present complex information with clarity, precision, and efficiency, making it easy to understand at a glance.[37] The goal is not to make the interface more complicated, but to make it more honest. It is a design challenge to represent nuance in a way that is intuitive and actionable, but it is a challenge that must be met if we are to build AI systems that truly make us smarter.
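A minimal sketch of "showing the maybe": the model's probabilistic output is preserved and translated into verbal framing and a suggested depth of review, instead of being collapsed into a yes/no flag. The bands, wording, and actions below are illustrative assumptions, not calibrated standards.

```python
def present_fraud_score(probability: float) -> str:
    """Keep the uncertainty visible: a confidence figure, a verbal frame, and a suggested action."""
    if probability >= 0.95:
        frame, action = "very likely fraud", "block and fast-track investigation"
    elif probability >= 0.80:
        frame, action = "likely fraud", "hold and review within the hour"
    elif probability >= 0.50:
        frame, action = "possible fraud", "review with additional context"
    else:
        frame, action = "unlikely fraud", "monitor only"
    return f"{probability:.0%} model confidence ({frame}); suggested action: {action}"

print(present_fraud_score(0.85))
# 85% model confidence (likely fraud); suggested action: hold and review within the hour
```

The reviewer, not the system, still makes the final call; the output is evidence with an honest error bar attached.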
Requirements Demand Skepticism
Challenge every assumption, especially 'that's how we've always done it.' Question until those doing the work can defend it with current logic. Principles applied dogmatically become obstacles (including these). When a requirement conflicts with reality, trust reality.
The Operational Dysfunction Organizations are filled with "ghost requirements"—rules and processes that have long outlived the original problem they were created to solve. A five-signature approval process for a $50 expense might have made sense in a paper-based world with no real-time budget tracking, but in a modern digital system, it is pure bureaucratic waste. The operational challenge is that these legacy requirements are rarely questioned. They are treated as immutable constraints, passed down from one generation of system designers to the next. This creates a digital archaeology where new, powerful technologies like AI are layered on top of decades of outdated and often contradictory business logic. We end up using sophisticated machine learning models to optimize a process that common sense should have eliminated years ago.
Why Standard Approaches Fail Standard requirements gathering processes often resemble an act of transcription rather than investigation. A business analyst interviews a subject matter expert and dutifully documents "how things are done." This approach fails because it assumes the current process is rational and necessary. It lacks a critical, skeptical lens. This is where the famous "Five Whys" technique from the Toyota Production System is so powerful.[38] By repeatedly asking "why?" you can drill down past the surface-level process to uncover the original, often obsolete, root cause. Standard approaches are too deferential to the status quo. They accept "that's the policy" as a final answer, without asking if the policy itself is still valid. This leads to the automation of absurdity, creating systems that are incredibly efficient at doing things that should not be done at all.
The Tenet's Core Logic The core logic of this tenet is to adopt a first-principles thinking approach to every single requirement. First-principles thinking, famously championed by figures like Aristotle and more recently Elon Musk, is the practice of boiling things down to their fundamental, undeniable truths and reasoning up from there.[39] When confronted with a requirement like "we need five signatures for this approval," the first-principles question is not "how do we automate the signature collection?" It is "what fundamental risk is this approval mitigating, and is there a better way to mitigate it now?" By relentlessly challenging the "what" and the "why" behind every requirement, you can strip away the accumulated cruft of historical accidents and outdated assumptions. This tenet demands that every requirement be able to defend its existence with current, provable logic. If it cannot, it must be deleted. Reality—the actual, observable needs of the work—must always trump a requirement that exists only because of institutional inertia.
Research Validation This principle is a practical application of critical thinking and echoes the philosophy of lateral thinking developed by Edward de Bono. De Bono argued that to solve problems creatively, we must deliberately challenge the dominant, "vertical" logic of a situation and explore alternative perspectives instead of digging the same hole deeper.[40] Treating requirements with skepticism is a form of institutionalized lateral thinking. It is also a core tenet of Business Process Reengineering (BPR), which advocates for radical redesign of core business processes to achieve dramatic improvements, rather than incremental automation of existing tasks.[41] While BPR as a movement had its flaws, its central insight remains profoundly relevant: you cannot achieve breakthrough results by optimizing a broken process. You must have the courage to question the foundational logic of the process itself, a practice which this tenet makes a core part of the development cycle.
Practical Application To apply this tenet, every project should have a designated "chief skeptic" or conduct a formal "kill the stupid rule" session as part of the requirements gathering process. The team must feel not just empowered, but obligated to challenge any requirement that smells of "because I said so" or "that's how we've always done it." A simple but powerful technique is to ask the requirement's proponent to articulate the specific, negative consequence of not doing it. If they cannot articulate a clear and present danger that is relevant today, the requirement is likely a candidate for deletion. The most common pitfall is offending the subject matter experts who have spent their careers operating within these rules. This must be handled with care. The skepticism must be aimed at the process, not the person, framing the effort as a way to liberate them and their expertise from outdated constraints.
Discovery Before Disruption
Systems reveal their true purpose when people actually use them. Seemingly pointless redundancies may reveal hidden logic. Unwritten rules only surface when engaging with the actual work. Always understand why things exist before you change them.
The Operational Dysfunction Driven by an eagerness to innovate, technical teams often approach a legacy system with the primary goal of replacing it. They see an old, clunky process and their first instinct is to tear it down and build something new, elegant, and modern from scratch. This "rip and replace" mentality is incredibly risky. It is born of arrogance—the belief that the original designers were unsophisticated and that the current users are simply tolerating a bad system. The operational shortcoming is failing to recognize that long-standing systems, despite their apparent inefficiency, have often evolved complex, invisible mechanisms to handle undocumented exceptions and edge cases. The seemingly "pointless" manual review step might be there to catch a specific type of fraud that the formal rules don't account for. The redundant spreadsheet might be the only tool that allows for a critical, ad-hoc analysis during a crisis.
Why Standard Approaches Fail Standard approaches to system analysis focus on the documented, official workflow—the "happy path." They model the process as it is supposed to work. This fails because the most important logic in a human system often exists in the undocumented workarounds, the informal networks, and the unwritten rules that people use to make the official process actually function. This is the essence of the Chesterton's Fence principle: do not tear down a fence until you understand why it was put up in the first place.[42] Standard analysis looks at the fence and sees an obstacle to a clean, open field. It fails to investigate whether that fence is protecting the field from something it cannot see. By ignoring the "as-is" reality in favor of a theoretical "to-be" design, these approaches set the stage for building a beautiful new system that fails catastrophically as soon as it encounters the messy realities of the real world.
The Tenet's Core Logic The core logic of this tenet is one of institutional humility. It insists that before you earn the right to disrupt a system, you must first earn a deep and empathetic understanding of it. This means treating the existing process not as a problem to be solved, but as a source of invaluable data. The goal of the initial discovery phase is not to design the new system, but to become an archaeologist of the old one. Why do people do what they do? What are the hidden pressures and incentives? What crises has this system survived, and what adaptations did it make? This archaeological work can culminate in building a computational model of the system; the ability to create a simulation that accurately reproduces its known behaviors, including its hidden logic and flaws, is the ultimate proof of understanding. This is about seeking first to understand, then to be understood. Only after you have mapped the hidden logic, identified the unwritten rules, and understood the "why" behind every seemingly illogical step can you begin to design a replacement that is not just more efficient, but more resilient.
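As a toy illustration of the "ultimate proof of understanding" idea, here is a simulation sketch: before replacing a legacy claims flow, the team encodes the observed behavior, including the seemingly redundant manual review, and checks that the model reproduces the outcomes people remember. All rules, field names, and numbers are invented for the example.

```python
def legacy_claims_process(claim: dict) -> str:
    """Toy model of an observed legacy flow. The 'redundant' manual review exists
    because it catches a vendor-mismatch fraud pattern the written rules never mention."""
    if claim["amount"] <= 1000 and claim["vendor_on_file"]:
        outcome = "auto-approved"
    else:
        outcome = "manager review"
    # The unwritten rule discovered only through observation:
    if claim["vendor_on_file"] and claim["vendor_name"] != claim["payee_name"]:
        outcome = "manual fraud review"
    return outcome

# Replay a sample of historical claims and compare against the outcomes people remember.
observed = [
    ({"amount": 400, "vendor_on_file": True, "vendor_name": "Acme", "payee_name": "Acme"}, "auto-approved"),
    ({"amount": 400, "vendor_on_file": True, "vendor_name": "Acme", "payee_name": "Acme LLC"}, "manual fraud review"),
    ({"amount": 4000, "vendor_on_file": False, "vendor_name": "Brix", "payee_name": "Brix"}, "manager review"),
]
assert all(legacy_claims_process(claim) == expected for claim, expected in observed)
print("Model reproduces the known behavior, hidden rule included.")
```

If the simulation cannot reproduce the system's quirks, the team has not yet earned the right to replace it.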
Research Validation This principle draws heavily from the fields of ethnography and systems thinking. Ethnographic research methods, often used in anthropology, involve immersing oneself in a culture to understand its implicit rules and behaviors—a perfect analogy for understanding a complex legacy system where the "official" rules rarely tell the whole story.[43] It is also a direct application of Peter Senge's "Laws of the Fifth Discipline." One of Senge's key laws is that "behavior grows better before it grows worse," warning that a quick, superficial fix often creates short-term improvement but long-term disaster because it disrupts hidden systemic balances that were never understood.[2] This tenet is a strategy to uncover those hidden balances before they are accidentally broken. It operationalizes the core Lean principle of genchi genbutsu ("go and see"), which commands that one must go to the source of the work to truly understand it, rather than analyzing it from a distance.[21]
Practical Application Practically, this means the first phase of any AI transformation project should be purely observational and anthropological. The team's job is to build a detailed "map" of the existing territory, including all the informal workarounds, shadow IT, and hidden social structures. A powerful tool for this is value stream mapping, but one that is focused on information flow and exception handling, not just the official process steps.[17] The team should actively seek out the "gurus"—the long-time employees who know all the system's secrets—and treat them as revered sources of wisdom, not as relics of an old way of working. A common pitfall is for this discovery phase to be seen as a delay to the "real work" of building. Leadership must champion this phase as the most critical part of risk mitigation. The map you create during discovery is the single most important tool for ensuring your new, disruptive technology does not drive the organization off a cliff it never saw coming.
Reveal the Invisible
Visual representations reveal complexity that written descriptions hide. A diagram shows bottlenecks, a journey map exposes human pain, a wireframe reveals confusion. Visuals become the instrument panel for navigating reality from the human perspective.
The Operational Dysfunction Organizations are drowning in text. We communicate about complex systems through dense documents, lengthy email chains, and bullet-pointed slide decks. This reliance on prose to describe dynamic, multi-dimensional processes is a massive operational obstacle. Written language is linear and sequential, while most organizational processes are parallel, interconnected, and cyclical. A thousand-word document describing a customer journey will never have the impact or clarity of a single, well-designed journey map that visually depicts the emotional highs and lows, the points of friction, and the moments of delight. By trying to describe complex realities with words alone, we ensure that no two people in the room are ever looking at the same problem. We create a fog of misinterpretation and ambiguity that makes true alignment impossible.
Why Standard Approaches Fail Standard approaches to documentation and communication prioritize comprehensive detail over shared understanding. The goal becomes producing a "complete" requirements document or project charter, often running hundreds of pages. This fails for a simple human reason: nobody reads it. And even if they did, the text-based format makes it impossible to see the system as a whole. It is like trying to understand a city by reading a list of all its street names. You have no sense of the layout, the neighborhoods, the traffic flows, or the relationships between the parts. This approach fails to leverage one of the most powerful processing engines we have: the human visual cortex. We are wired to process and understand information spatially. By ignoring this fundamental aspect of our cognition, standard approaches create documents that are more effective at covering liability than at creating clarity.
The Tenet's Core Logic The core logic of this tenet is that shared understanding is a visual phenomenon. To get a diverse group of people—engineers, marketers, executives, designers—aligned on a complex problem, you must give them a single picture to look at together. This is the power of what is often called a "boundary object"—a shared visual representation that different groups can use to collaborate and communicate.[44] A Wardley Map reveals the strategic landscape of a business and the evolutionary movement of its components.[45] A customer journey map reveals the emotional reality of an experience. A system architecture diagram reveals the technical dependencies. Each of these visual tools serves to make an invisible structure tangible and debatable. It takes the abstract concept out of people's heads and puts it onto a wall where it can be pointed at, argued with, and improved collectively. The visual becomes the shared "truth" that anchors the conversation.
Research Validation This principle is validated by decades of research in cognitive psychology and information visualization. Cognitive load theory suggests that our working memory is extremely limited, and well-designed visuals can offload complex information, making it easier to process and understand than purely verbal or textual explanations.[46] The work of Edward Tufte provides a foundational masterclass in how to display quantitative and qualitative information with clarity and integrity, arguing that good visual design reveals the truth of the data, not just decorates a page.[37] This tenet is also a core component of Design Thinking, which relies heavily on visual artifacts like storyboards, sketches, and prototypes to explore and communicate ideas because they are faster to create, easier to understand, and more effective at eliciting honest feedback than a dense document.[20] The mantra is "show, don't tell," because showing bypasses the ambiguity of language and creates a direct, shared context.
Practical Application Applying this tenet means shifting the default mode of communication from writing to drawing. Before writing a single line of code or a dense requirements document, the team should be creating visual artifacts. Start with a messy sketch on a whiteboard to align initial thoughts. Refine it into a clearer digital diagram using accessible tools. The goal is not to create a beautiful artifact for a presentation, but a functional one for a conversation. The visual is a disposable tool for thinking. A common pitfall is "analysis paralysis," where teams spend too much time perfecting the diagram instead of using it to make decisions. The map is not the territory. The moment the visual stops generating productive conversation or revealing new insights, its primary job is done. It is time to move on, make a decision, or create a new visual to explore the next layer of the problem.
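One way to keep the visual disposable is to generate it from data rather than hand-craft it. The sketch below is a minimal, hypothetical example: it emits Mermaid flowchart text from a plain list of journey steps and observed moods, so the diagram can be regenerated in seconds whenever the team's understanding changes. The journey steps and the choice of Mermaid are illustrative assumptions, not a prescribed tooling standard.

```python
# Hypothetical journey steps with the emotional state observed at each one.
journey = [
    ("Browse catalog", "neutral"),
    ("Create account", "pain"),
    ("Checkout", "pain"),
    ("Delivery tracking", "delight"),
]

def to_mermaid(steps):
    """Emit Mermaid flowchart text that any Mermaid renderer can turn into a picture."""
    lines = ["flowchart LR"]
    for i, (label, mood) in enumerate(steps):
        lines.append(f'  s{i}["{label}<br/>({mood})"]')
    lines.extend(f"  s{i} --> s{i + 1}" for i in range(len(steps) - 1))
    return "\n".join(lines)

print(to_mermaid(journey))  # paste the output into a renderer and put it on the wall
```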
Embrace Necessary Complexity
Some complexity creates competitive advantage; other complexity just creates work. A sophisticated fraud detection model creates an edge; a five-approval purchase process does not. Delete what slows people down, and invest in complexity that eliminates customer pain.
The Operational Dysfunction In the quest for simplification, organizations often declare war on complexity itself. They adopt mantras like "keep it simple" and apply them indiscriminately, without distinguishing between the types of complexity they are facing. This is a critical error. There is "bad" complexity—the bureaucratic, soul-crushing kind that manifests as redundant processes, unnecessary approvals, and convoluted workflows. This is the complexity that adds no value and slows everyone down. But there is also "good" complexity—the kind that is inherent to solving a difficult problem in a sophisticated way. A cutting-edge AI model for drug discovery is necessarily complex. A finely tuned logistical network that can adapt to disruption is necessarily complex. A major operational flaw is the tendency to treat all complexity as an enemy. This results in "dumbing down" systems that offer a competitive advantage while doing nothing to remove the bureaucratic complexity that's holding the organization back.
Why Standard Approaches Fail Standard approaches, particularly those influenced by simplistic interpretations of Lean or Agile, can fall into the trap of "simplicity theater." They focus on reducing visible complexity (e.g., fewer steps in a process) without addressing the underlying, essential complexity of the problem. This often leads to a phenomenon known as the "conservation of complexity" or Tesler's Law: every application has an inherent amount of complexity that cannot be removed.[47] The only question is who has to deal with it—the user or the developer. A standard approach that oversimplifies a tool for a power user (like a financial analyst or a graphic designer) doesn't eliminate complexity; it just forces the user to find complex workarounds, defeating the purpose. They fail because they lack a strategic framework for deciding which complexity is worth investing in and which should be ruthlessly eliminated.
The Tenet's Core Logic The core logic of this tenet is to treat complexity as a strategic investment, not an inherent evil. Before deciding to add or remove complexity, one must ask a simple question: "Who does this serve?" If the complexity serves the customer by solving a difficult problem for them, or serves the business by creating a durable competitive advantage, it is "good" complexity and is worth investing in. The intricate algorithm of a recommendation engine is good complexity because it serves the customer's need for discovery. If the complexity only serves the organization's internal bureaucracy, it is "bad" complexity and must be eliminated. The seven-step travel reimbursement process is bad complexity because it serves no one but the internal compliance process. This tenet demands a strategic audit of all complexity, forcing the organization to justify its existence based on the value it creates for the end user.
Research Validation This principle aligns with the concept of "disruptive innovation" as described by Clayton Christensen. Christensen noted that incumbents often get trapped by adding sustaining, complex features for their most demanding customers, creating products that are too complicated and expensive for the mainstream market. Disruptors win by offering a simpler, more accessible solution.[6] This tenet refines that idea: the goal is not just simplicity, but the right kind of simplicity, aimed at eliminating the user's struggle. It also connects to the Theory of Constraints, which focuses on identifying and managing the single bottleneck that limits a system's performance.[5] Bad complexity creates countless artificial bottlenecks throughout an organization (e.g., approval gates). Good complexity is a strategic investment in optimizing the one true constraint that matters for competitive advantage. The goal is to free up resources from fighting internal friction to invest them in building a better, more sophisticated solution to the customer's real-world problem.
Practical Application In practice, this requires a recurring "complexity audit." For any given process or feature, the team must map who bears the burden of its complexity and who reaps the benefit. If the burden falls on the customer or the front-line employee, and the benefit accrues to an internal department's sense of control, it is a prime candidate for elimination. The most powerful way to do this is to make the cost of internal complexity visible. Track the person-hours and delay-time consumed by internal processes and compare that to the value they actually produce. A common pitfall is that internal departments will defend their complex processes as essential for "risk management" or "compliance." The challenge, using first-principles thinking, is to dig deeper and ask if there is a simpler, automated way to achieve the same compliance outcome without placing a massive operational tax on the rest of the organization.
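A complexity audit can begin as something as simple as a shared ledger. The sketch below, with invented processes, rates, and hours, shows the basic arithmetic: who bears the burden, what it costs per year, and what value it claims to protect.

```python
# Hypothetical audit ledger: (process, who bears the burden, person-hours/year,
# delay in days per instance, the value it claims to protect).
processes = [
    ("Travel reimbursement (7 approvals)", "employees",  4200, 12, "compliance"),
    ("Vendor onboarding (5 sign-offs)",    "front line", 2600, 30, "risk management"),
    ("Fraud-model tuning",                 "data team",  1800,  0, "customer protection"),
]

LOADED_RATE = 85  # assumed fully loaded $/hour; substitute your organization's figure

for name, burden_on, hours, delay_days, claimed_value in processes:
    annual_cost = hours * LOADED_RATE
    print(f"{name}: ~${annual_cost:,.0f}/yr borne by {burden_on}, "
          f"{delay_days} days of delay per instance, justified as '{claimed_value}'")
```

The point is not the spreadsheet math; it is that once the cost is visible in dollars and days, "essential for compliance" must be defended against a number rather than merely asserted.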
Optimize For Velocity
Every delay costs opportunity, but speed without efficiency burns resources like compute cycles, human time, and organizational energy. Poor resource allocation creates workflow friction. Relentlessly eliminate unnecessary friction.
The Operational Dysfunction Organizations are adopting AI with a brute-force mentality, equating more computational power with better outcomes. This leads to a new form of digital waste: using massive, general-purpose AI models for tasks that require only a fraction of their capability. A team might deploy a state-of-the-art large language model to perform simple sentiment analysis, a task a much smaller, specialized model could handle faster and cheaper. This isn't just inefficient; it's a systemic misalignment. It creates computational friction—unnecessary latency, high energy costs, and unpredictable outputs—that slows down the very processes it was meant to accelerate. This approach treats intelligence as a blunt instrument, ignoring the fact that the wrong size of tool, no matter how powerful, creates more problems than it solves. We are building systems that are impressively intelligent but operationally sluggish and economically unsustainable.
Why Standard Approaches Fail Standard approaches are captive to the "bigger is better" narrative of the AI industry, which is locked in a marketing-driven race toward Artificial General Intelligence (AGI). This thinking trickles down, leading implementation teams to believe that using the largest, most powerful model is a sign of sophistication. This fails because it optimizes for the wrong variable: raw capability instead of task-appropriate efficiency. It ignores the significant and often hidden costs of over-engineering. As journalist Karen Hao has documented, the environmental toll of training and running these massive models—in terms of energy and water consumption—is staggering.[48] Standard approaches fail to account for this computational drag and environmental externality. They mistake theoretical power for practical velocity, leading to the deployment of "intelligent" systems that are slow, expensive, and wasteful, all while chasing a hypothetical AGI that has little bearing on solving today's concrete business problems.
The Tenet's Core Logic The core logic is to reframe optimization from a one-dimensional focus on time to a three-dimensional focus on eliminating friction. True velocity is not just about speed; it's about the unimpeded flow of value. This requires attacking friction in all its forms:
- Temporal Friction: The classic delays that Lean methodologies target—wait times, handoffs, and approval queues.
- Organizational Friction: The bureaucratic cruft that adds control without adding value, a concept this tenet shares with "Requirements Demand Skepticism."
- Computational Friction: The new waste unique to the AI era—using mismatched intelligence that creates latency, burns excess energy, and adds unnecessary operational cost.
This tenet advocates for a shift from the pursuit of AGI to the practice of "OGI"—Organizational General Intelligence. OGI is the principle of applying the minimum effective intelligence required to solve a problem. It's a strategic choice to favor smaller, specialized, and more efficient models over large, generalist ones whenever possible. It recognizes that in an operational context, a system's value is not determined by its theoretical maximum intelligence, but by its ability to deliver the right answer with the necessary speed and the lowest possible cost. By relentlessly eliminating these three types of friction, organizations can achieve a sustainable, high-velocity state that is both operationally effective and economically and environmentally responsible.
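A minimal sketch of that routing logic, under the assumption that each task can be attempted by a small, cheap model first and escalated only when its confidence misses the quality bar. The model tiers, costs, and classify functions below are placeholders, not real APIs.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ModelTier:
    name: str
    relative_cost: float                          # assumed cost units per call
    classify: Callable[[str], tuple[str, float]]  # returns (label, confidence)

# Stand-ins for real models: a fine-tuned small classifier and a large generalist.
def small_model(text: str):  return ("positive", 0.93)
def large_model(text: str):  return ("positive", 0.99)

TIERS = [
    ModelTier("small-sentiment", 1.0, small_model),
    ModelTier("large-generalist", 40.0, large_model),
]

def route(text: str, quality_bar: float = 0.90) -> str:
    """Apply the minimum effective intelligence: escalate only when the bar is missed."""
    label, confidence = "", 0.0
    for tier in TIERS:
        label, confidence = tier.classify(text)
        if confidence >= quality_bar:
            print(f"Handled by {tier.name} at relative cost {tier.relative_cost}")
            return label
    return label  # fall back to the most capable tier's answer

route("The onboarding flow was painless and quick.")
```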
Research Validation This principle extends the foundational Lean manufacturing concept of eliminating muda (waste) into digital environments.[21] While Lean traditionally focused on physical inventory and wait times, this tenet identifies computational overkill as a new, critical form of digital waste. The argument is supported by critical reporting on the immense resource consumption of large-scale AI, which provides a tangible, economic, and ethical imperative for computational efficiency.[48] Furthermore, the core idea aligns with the Theory of Constraints, which posits that any system's output is limited by a single bottleneck.[5] In an AI-enabled workflow, an oversized, high-latency model can easily become the new constraint that throttles the entire process, no matter how optimized other steps are. The call for "just enough intelligence" is also a direct application of systems thinking, which warns that local optimization—like choosing the "smartest" model—can lead to global sub-optimization when the model's cost, latency, or energy use degrades the performance of the entire system.[2]
Practical Application Applying this tenet means making "model sizing" a critical part of the design process. Before deploying an AI component, the team must ask: "What is the simplest, smallest, most efficient model that can achieve the required quality bar?" This might mean choosing a fine-tuned open-source model over a massive proprietary API for a specific task. It requires measuring not just the accuracy of a model, but its latency, cost-per-inference, and energy consumption as key performance indicators. A common pitfall is "resume-driven development," where engineers choose a complex, powerful model because it is more interesting, not because it is more appropriate. Leadership must counter this by celebrating and rewarding efficiency. They can do this by making the total cost of a workflow—including its computational cost—a visible and primary metric, shifting the definition of success from "it works" to "it works with maximum velocity and minimum friction."
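At design time, the same idea becomes a selection exercise: measure each candidate against the KPIs above and take the cheapest one that clears the bar the task actually requires. The candidate names and numbers below are invented for illustration.

```python
# Invented benchmark results: (name, accuracy, p95 latency in ms, $ per 1,000 calls).
candidates = [
    ("small-finetuned", 0.91,  45,  0.40),
    ("mid-generalist",  0.93, 220,  4.00),
    ("frontier-model",  0.95, 900, 30.00),
]

QUALITY_BAR = 0.90  # set with the objective's owner, not defaulted to "best available"

viable = [c for c in candidates if c[1] >= QUALITY_BAR]
name, accuracy, latency_ms, cost = min(viable, key=lambda c: c[3])
print(f"Selected {name}: accuracy {accuracy:.0%}, p95 {latency_ms} ms, ${cost}/1k calls")
```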
Iterate Towards What Works
The best requirements emerge through building, not planning sessions. Real understanding comes from making, testing, and failing in rapid cycles. Improvement cycles reveal what meetings will not. Build to discover.
Earn the Right to Rebuild
People naturally want to rebuild broken systems from scratch rather than improve them incrementally. Total rebuilds without earned understanding create elegant solutions to misunderstood problems. Prove systems can be improved before attempting to replace them entirely.
The Operational Dysfunction When faced with a complex, messy, and unreliable legacy system, the most seductive idea for any engineering team is the "Big Rewrite." The promise is to throw away the tangled mess of old code and convoluted processes and replace it with a clean, modern, perfectly architected solution. This desire is understandable, but it is also incredibly dangerous. The operational issue is that this impulse is often born from a superficial understanding of the problem domain. The team sees the ugliness of the current implementation without fully grasping the hidden, essential complexity that the old system—for all its flaws—actually handles. The Big Rewrite becomes an exercise in hubris, where a team confidently sets out to build a new system without first demonstrating that they truly understand the old one.
Why Standard Approaches Fail Standard approaches often present a false choice between two extremes: either continue to apply small, incremental patches to a failing system, or embark on a massive, multi-year project to replace it entirely. The first option leads to a slow, painful death by a thousand cuts. The second is a high-stakes gamble that, according to industry data on large software projects, is more likely to fail than to succeed.[50] The Big Rewrite approach fails because it violates the principle of iterative feedback. It is a return to the worst aspects of waterfall development, with a massive, singular point of failure at the final launch. It defers all learning until the very end, at which point the cost of being wrong is catastrophic. It is an all-or-nothing bet made with incomplete information.
The Tenet's Core Logic The core logic of this tenet is that the right to attempt a revolutionary act of replacement must first be earned through the evolutionary act of improvement. Before you can be trusted to build the new system, you must first prove that you can successfully modify and improve the old one. This "earn the right" approach serves two critical purposes. First, it forces the team to engage deeply with the legacy system, uncovering all of its hidden logic and undocumented features. The process of making a successful improvement is the ultimate discovery phase. Second, it delivers immediate value to the business and builds trust with stakeholders. By fixing real problems in the current system, the team demonstrates its competence and earns the political capital necessary to propose a more ambitious project later. This earned understanding can also be used to build a robust simulation of the legacy system, allowing a proposed replacement to be validated against a digital twin before a single line of production code is written. The approach creates a virtuous cycle: improvement leads to understanding, understanding leads to trust, and trust leads to the opportunity for transformation.
Research Validation This principle is a practical synthesis of several established frameworks. It combines the Lean Startup's emphasis on validated learning and iterative improvement with the pragmatic risk management of Chesterton's Fence, which warns against changing a system before you understand its original purpose.[42] Most importantly, it aligns with John Kotter's seminal research on leading organizational change. Kotter found that successful, large-scale change efforts almost always begin by generating and publicizing "short-term wins"—unambiguous, visible improvements that provide credibility and momentum for the larger, more difficult change initiative.[51] Earning the right to rebuild is a strategy for creating those short-term wins. It is a way of de-risking the future by mastering the present. This approach also builds on the idea of *Monozukuri*, the Japanese philosophy of craftsmanship, which emphasizes a deep respect for and mastery of the existing process as a prerequisite for meaningful innovation.[52]
Practical Application In practice, when a team proposes a Big Rewrite, the first question from leadership should be: "What three significant improvements have you made to the current system in the last six months?" If the answer is "none," the rewrite proposal should be rejected. The team should instead be tasked with a series of surgical, high-impact improvements to the existing system. For example, can they isolate the most unstable component and stabilize it? Can they fix the three most common user complaints? Can they automate the most painful manual workaround? Each successful improvement is a step towards earning the right, and each one provides invaluable learning. A common pitfall is that engineers may see this as unglamorous "maintenance work." It is the role of leadership to frame this work as what it is: the most critical, high-stakes intelligence-gathering mission the team can undertake. It is the work that separates professional engineers from reckless architects.
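That leadership question can even be made mechanical. The sketch below is a hypothetical "rewrite gate": given a log of shipped improvements to the legacy system, it decides whether a rewrite proposal has earned a hearing. The threshold, dates, and improvement records are illustrative assumptions, not a prescribed policy.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Improvement:
    description: str
    shipped: date
    measurable_impact: str   # e.g., "support tickets for X down 40%"

# Hypothetical track record for a team proposing a rewrite.
track_record = [
    Improvement("Stabilized the nightly batch job", date(2025, 3, 4), "failures down 90%"),
    Improvement("Automated manual invoice re-keying", date(2025, 5, 19), "8 hrs/week saved"),
]

def rewrite_gate(improvements, window_days=180, required=3, today=date(2025, 8, 1)):
    recent = [i for i in improvements if (today - i.shipped).days <= window_days]
    if len(recent) >= required:
        return "Proposal may proceed to review; the team has earned the right."
    return (f"Rejected for now: only {len(recent)} significant improvement(s) shipped "
            f"in the last {window_days} days. Ship {required - len(recent)} more first.")

print(rewrite_gate(track_record))
```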
Copyright Notice
This work, "AI First Principles Manifesto," is licensed under the Creative Commons Attribution 4.0 International License (CC BY 4.0). To view a copy of this license, visit https://creativecommons.org/licenses/by/4.0/. Attribution: When quoting or using parts of this work, please provide clear attribution to "AI First Principles" and link back to https://aifirstprinciples.org.
References
[1]: Ransbotham, Sam, et al. "The Cultural Benefits of Artificial Intelligence in the Enterprise." MIT Sloan Management Review and Boston Consulting Group, May 2022.
[2]: Senge, Peter M. The Fifth Discipline: The Art & Practice of The Learning Organization. Doubleday/Currency, 2006.
[3]: Norman, Don. The Design of Everyday Things: Revised and Expanded Edition. Basic Books, 2013.
[4]: Wilson, Robb, and Josh Tyson. Age of Invisible Machines: A Practical Guide to Creating a Better, More Productive, and More Human World of Work. 2nd ed. Wiley, 2025.
[5]: Goldratt, Eliyahu M., and Jeff Cox. The Goal: A Process of Ongoing Improvement. North River Press, 2014.
[6]: Christensen, Clayton M., et al. "Know Your Customers' 'Jobs to Be Done'." Harvard Business Review, September 2016.
[7]: National Institute of Standards and Technology. "Artificial Intelligence Risk Management Framework (AI RMF 1.0)." NIST, Jan. 2023, https://doi.org/10.6028/NIST.AI.100-1.
[8]: Parasuraman, Raja, and Dietrich H. Manzey. "Complacency and Bias in Human Use of Automation: An Attentional Integration." Human Factors, vol. 52, no. 3, 2010, pp. 381-410.
[9]: Brooks, Frederick P., Jr. The Mythical Man-Month: Essays on Software Engineering, Anniversary Edition. Addison-Wesley Professional, 1995.
[10]: Deming, W. Edwards. Out of the Crisis. MIT Press, 2000.
[11]: Winner, Langdon. "Do Artifacts Have Politics?" Daedalus, vol. 109, no. 1, 1980, pp. 121–36.
[12]: Trist, Eric L., and Ken W. Bamforth. "Some Social and Psychological Consequences of the Longwall Method of Coal-Getting." Human Relations, vol. 4, no. 1, 1951, pp. 3-38.
[13]: Greenleaf, Robert K. Servant Leadership: A Journey into the Nature of Legitimate Power and Greatness. Paulist Press, 2002.
[14]: Beck, Kent, et al. "Manifesto for Agile Software Development." Agile Manifesto, 2001, agilemanifesto.org.
[15]: Brignull, Harry. "Dark Patterns: Deception vs. Honesty in UI Design." A List Apart, 2011.
[16]: Edmondson, Amy C. The Fearless Organization: Creating Psychological Safety in the Workplace for Learning, Innovation, and Growth. Wiley, 2018.
[17]: Rother, Mike, and John Shook. Learning to See: Value Stream Mapping to Add Value and Eliminate Muda. Lean Enterprise Institute, 2009.
[18]: Kant, Immanuel. Groundwork of the Metaphysics of Morals. 1785.
[19]: Calvo, Rafael A., and Dorian Peters. Positive Computing: Technology for Wellbeing and Human Potential. MIT Press, 2014.
[20]: Brown, Tim. "Design Thinking." Harvard Business Review, June 2008.
[21]: Liker, Jeffrey K. The Toyota Way: 14 Management Principles from the World's Greatest Manufacturer. McGraw-Hill, 2004.
[22]: von Bertalanffy, Ludwig. General System Theory: Foundations, Development, Applications. George Braziller, 1968.
[23]: Klein, Gary. Sources of Power: How People Make Decisions. MIT Press, 1998.
[24]: Weick, Karl E., and Kathleen M. Sutcliffe. Managing the Unexpected: Resilient Performance in an Age of Uncertainty. Jossey-Bass, 2007.
[25]: Sheridan, Thomas B., and William L. Verplank. "Human and Computer Control of Undersea Teleoperators." Man-Machine Systems Laboratory, MIT, 1978. This paper is a foundational text establishing a scale for levels of automation.
[26]: Følstad, Asbjørn, and Petter Bae Brandtzæg. "Chatbots and the New World of HCI." Interactions, vol. 24, no. 4, 2017, pp. 38-42.
[27]: Mori, Masahiro. "The Uncanny Valley." Translated by Karl F. MacDorman and Norri Kageki, IEEE Robotics & Automation Magazine, vol. 19, no. 2, 2012, pp. 98-100.
[28]: Faden, Ruth R., and Tom L. Beauchamp. A History and Theory of Informed Consent. Oxford University Press, 1986.
[29]: Schweitzer, Maurice E., and Rachel Croson. "Curtailing Deception: The Impact of Direct Questions on Lies and Omissions." International Journal of Conflict Management, vol. 10, no. 3, 1999, pp. 225-248.
[30]: Shingo, Shigeo. Zero Quality Control: Source Inspection and the Poka-yoke System. Productivity Press, 1986.
[31]: Taleb, Nassim Nicholas. Antifragile: Things That Gain from Disorder. Random House, 2012.
[32]: Kim, Gene, et al. The Phoenix Project: A Novel About IT, DevOps, and Helping Your Business Win. IT Revolution Press, 2013.
[33]: Boehm, Barry W. Software Engineering Economics. Prentice-Hall, 1981.
[34]: Kahneman, Daniel. Thinking, Fast and Slow. Farrar, Straus and Giroux, 2011.
[35]: McElreath, Richard. Statistical Rethinking: A Bayesian Course with Examples in R and Stan. Chapman and Hall/CRC, 2020.
[36]: Weick, Karl E. Sensemaking in Organizations. SAGE Publications, 1995.
[37]: Tufte, Edward R. The Visual Display of Quantitative Information. Graphics Press, 2001.
[38]: Ohno, Taiichi. Toyota Production System: Beyond Large-Scale Production. Productivity Press, 1988.
[39]: Musk, Elon. Interview with Kevin Rose. 2013. The "Musk Algorithm" or first principles approach involves breaking a problem down to its fundamental truths.
[40]: De Bono, Edward. Lateral Thinking: Creativity Step by Step. Harper & Row, 1970.
[41]: Hammer, Michael, and James Champy. Reengineering the Corporation: A Manifesto for Business Revolution. Harper Business, 1993.
[42]: Chesterton, G. K. The Thing: Why I Am a Catholic. Dodd, Mead & Company, 1929. The principle is often summarized as: "Do not remove a fence until you know why it was put up in the first place."
[43]: Fetterman, David M. Ethnography: Step-by-Step. SAGE Publications, 2010.
[44]: Star, Susan Leigh, and James R. Griesemer. "Institutional Ecology, 'Translations' and Boundary Objects: Amateurs and Professionals in Berkeley's Museum of Vertebrate Zoology, 1907-39." Social Studies of Science, vol. 19, no. 3, 1989, pp. 387-420.
[45]: Wardley, Simon. "An Introduction to Wardley Mapping." Medium, 2015.
[46]: Sweller, John. "Cognitive Load Theory, Learning Difficulty, and Instructional Design." Learning and Instruction, vol. 4, no. 4, 1994, pp. 295-312.
[47]: Tesler, Larry. Quoted in: "Effective UI: The Art of Building Great User Experience in Software" by Toptal Engineering Blog. The law states that every application has an inherent amount of irreducible complexity.
[48]: Hao, Karen. "Training a single AI model can emit as much carbon as five cars in their lifetimes." MIT Technology Review, June 6, 2019.
[49]: Brooks, Frederick P., Jr. "No Silver Bullet: Essence and Accidents of Software Engineering." Computer, vol. 20, no. 4, 1987, pp. 10-19.
[50]: The Standish Group. CHAOS Report. 2015. This report and subsequent versions consistently show high rates of failure or challenges for large software projects.
[51]: Kotter, John P. Leading Change. Harvard Business School Press, 1996.
[52]: Japan Management Association. Monozukuri: The Spirit of Japanese Manufacturing. 2004. Describes the philosophy of craftsmanship and continuous improvement.
The term AI First Principles™ is a trademark and is protected under common law trademark rights.
Copyright (c) 2025 AI First Principles (aifirstprinciples.org)
AI First Principles is licensed for public use with attribution.