Value Stream 3: Existence
Value Stream 3 A3 Report:
Genchi Genbutsu is a Lean term directing managers to go to the source of all production called the Gemba — and in the age of AI, it now directs us to understand the source of value creation itself
Lean suggests the Gemba may be reached by asking "why" five times to reach the cause of any given problem — AI can help process these questions at scale, but only humans can interpret the existential answers
Leanism takes you well past five "whys" to the problem of existence itself and thus to the cause of all true-north value — the philosophical foundation necessary to lead with AI rather than simply use it
The three types of true-north value are "Universal," "Process" and "Personal" ("UPP"), with each having certain commonly agreed degrees of truth-value — AI can recognize patterns in all three, but cannot itself generate genuine meaning from any
The payment of money represents a para-scientific test of whether customers perceive what they bought as being worthwhile to their existences, thereby providing a means to measure the converged consensus of what true-north value is — AI can calculate monetary flows but only humans can understand what makes money meaningful
The Ontological Medium (the "OM") is the vehicle through which this converged consensus occurs, gets measured and drives people toward greater heights — AI systems operate within the OM but cannot comprehend it philosophically
You can better identify what most consumers will buy by using an Intuition Bracket (the "IB") to examine the broadly applicable universal and process values, leaving aside ones that are only personal in nature — AI can help map these categories but only human wisdom determines their boundaries
The Ontological Teleology (the "OT" or "Ought") is the possibly tautological goal of all consumption within the IB and OM — the purpose toward which you must lead with AI systems rather than letting AI optimize toward arbitrary metrics
Ontologically Prospective Projects ("OPPs") are those goal-directed activities consumers do to advance upward along the OT within the OM and IB — identifying these OPPs is where human philosophical insight directs AI pattern-matching toward genuine value creation
{pagebreak}
Sometimes you gotta go back to actually move forward, and I don't mean going back to reminisce or chase ghosts. I mean go back to see where you came from, where you been, how you got here. I know there are those that say you can't go back. Yes, you can. You just have to look in the right place. - Matthew McConaughey in the Intro to the Lincoln MKC automobile commercial, directed by Nicolas Winding Refn (2014).
Value Stream 3 now investigates the Lean concept of Genchi Genbutsu, a Japanese term that directs managers to get out of the board room and go to the "Gemba", which is the source of all production in Lean. In the age of AI, Genchi Genbutsu takes on even deeper significance. While AI systems can process vast amounts of data about customers, aggregate purchasing patterns, and predict future behavior with remarkable accuracy, they cannot go to the Gemba in the way humans must. They cannot empathize with the existential condition of consumers. They cannot understand why existence matters or what makes life worth extending and optimizing. This is precisely where human philosophical leadership becomes irreplaceable — AI can augment your capacity to reach the Gemba, but only you can interpret what you find there in terms of human meaning, dignity, and value.[^159]
To follow Genchi Genbutsu to the Gemba within the metaphysics of Lean, you must seek the source of all Lean value streams to get to the genesis of all original work. You get to the ultimate Gemba by asking "why" at least five times, which is the most important question about the origin of true value you can find since it leads to the first cause or mover. In the AI age, this process of asking "why" repeatedly becomes a collaborative act — you use AI to help generate hypotheses, process customer feedback at scale, and identify patterns in behavior — but you must supply the philosophical framework that determines when you have reached a meaningful "why" versus merely another data correlation. AI can tell you that customers behave a certain way; only you can understand why that behavior emerges from their existential condition and what it reveals about who they truly are.[^159-1]
Once you get as close as you can to the source of the ultimate "why," you must then follow the true-north value streams you find across blue oceans toward the horizon of who all consumers are. Genchi Genbutsu requires that you get down as close as you can to the source of all knowledge and existence in the Gemba within the Lean House of Quality from which all profit originates:
Figure 3.1: U/People Organizational Chart

Since this process of Genchi Genbutsu takes you to the penultimate question of why customers' deepest problems exist, it explains all that they fundamentally value, find most meaningful, and will consume and pay for. AI systems can help you process customer data, identify pain points, and model demand curves — but AI cannot answer the existential "why" that makes those problems matter to human beings. AI can tell you that customers prefer product A over product B, but it cannot tell you why that preference emerges from who customers are trying to become or what would genuinely extend and optimize their lives and existences. This is the domain of human philosophical understanding, supported but never replaced by AI's computational power.
Thus, following Genchi Genbutsu to the Gemba universally leads you to the edge of what causes all consumption. When you reach the first degree of causation, you then have found the cement on which the foundation of an HQ may lean.[^159] The causation of consumption and existence is the first brick from which you will build an understanding of the major philosophical, scientific, scientismic, theological and intuitive perspectives. By understanding these perspectives — and by recognizing which aspects AI can process versus which require uniquely human wisdom — you will construct a well of knowledge from which all consumers' value streams will spring. AI becomes your tool for organizing and analyzing this knowledge, but you remain the architect who must design the well itself based on your philosophical understanding of existence.
As stated before, the metaphysics of Lean provides you with the clearest perspective on existing knowledge to reach the greatest profits — and in the AI age, it provides the framework for directing AI toward those profits meaningfully rather than merely efficiently. I want you to deduce the causal link between who consumers are at their existential limits and what you ought to reproduce for them so that you will make more money.[^160] AI can help you process vast amounts of information about consumers, but it cannot make this deductive leap from existence to value creation. That requires human philosophical judgment guided by Lean thinking — you must understand who consumers are as beings existing in the world, why that existence matters, what would genuinely improve their condition, and how to deliver it profitably. AI assists with each step, but you lead the process.
When people seek rational answers to these amazing questions, they usually go too far down an intellectual path to communicate back in any sort of concise way, but I anxiously hope to do exactly that here on this side of nonsense.[^161] The challenge in the AI age is similar but amplified — AI systems can generate infinite amounts of content about existence, value, and meaning, but most of that content will be philosophically shallow pattern-matching rather than genuine insight. You must learn to think through and beyond AI prompts, using AI to amplify your capacity for philosophical analysis while never delegating to AI the judgment about what constitutes genuine understanding versus mere information processing.
Existence and Ontology Defined
Let's start with a formal definition of "Existence." The Oxford English Dictionary defines "Existence" as:[^162]
Existence, n. /ɛɡˈzɪstəns/
1. Actuality, reality.
a. Being; the fact or state of existing; 'actual possession of being'. in existence: as predicate = 'extant'. b. Continued being; continuance in being. c. Continuance of being as a living creature; life.
2. A mode or kind of existing.
3. a. All that exists; the aggregate of being. b. Something that exists; a being, an entity.
Understanding existence is fundamental to leading AI effectively because AI systems exist in a categorically different way than biological organisms. AI can process, generate, and optimize — it can exist as information and process within the ontological medium — but it does not live, experience, or pursue its own existence the way consumers do. This distinction is critical: when you lead with AI, you are directing a tool that exists toward serving purposes that emerge from the existential condition of beings who live. Confusing these categories — treating AI as if it were alive or autonomous, or treating living customers as if they were mere data points to be optimized — is the fundamental error that Lean philosophy prevents.
"Ontology," however, is the philosophical, scientific and business term for "Existence" and the nature of being. The Oxford English Dictionary uses the word "Existence" to define "Ontology" as:[^163]
Ontology, n. /ɑnˈtɑlədʒi/
a. The science or study of being; that branch of metaphysics concerned with the nature or essence of being or existence.
The Leanism lexicon leans on the term "Ontology" a great deal as a technical reference, so please get used to reading it![^164] In the AI age, ontology takes on additional practical importance — when you design AI systems, train large language models, or prompt AI to analyze your business, you are implicitly working with ontological assumptions about what exists, what matters, and what can be known. Every AI prompt reflects an ontology, whether you realize it or not. Leanism makes these ontological commitments explicit, allowing you to direct AI systems philosophically rather than merely technically.
To emphasize how important "Ontology" is to Leanism and estimating true-north value in the Gemba, you could write ontology with a circumflex pronunciation accent "^" over the letter oh, which is officially pronounced with the short oh sound of "ah," like "ôccupation," "ôntologically," or "paôs." As you may recall, the formal Lean symbol "Ô" also stands for the Japanese term, Hōshin Kanri, meaning, "Compass Guided Management," representing the direction of all true-north value.[^165]
Thus, understanding normative value through consumers' ontologies allows you to:
Intuit, infer or induce universal assumptions with some degrees of confidence as to why something matters most to consumers; and
Deductively measure how you create true-north value for consumers necessarily committed to a shared ontology by living together within the open-ended universe.
In the AI age, you can now add a third capability:
Direct AI systems to process vast amounts of ontological data — information about how consumers live, exist, and seek meaning — while maintaining human philosophical judgment about what that data reveals concerning true-north value.
You could likewise carry this circumflex "^" accent over the letter oh to other, related concepts within Leanism, like "ôptimization." This becomes especially important when working with AI, which is fundamentally an optimization technology. AI optimizes objectives — but it cannot determine which objectives are worth optimizing for. That determination requires ontological understanding — knowing what exists, what matters about existence, and what would genuinely extend and optimize human lives and existences. Human wisdom directs AI's optimization power toward true-north value. Without this philosophical direction, AI simply optimizes whatever metric you specify, even if that metric conflicts with genuine human flourishing. However meaningfully symbolic it may be, I will hold back my use of the circumflex for the sake of legibility.
Many non-philosophical disciplines use the term ontology as well, describing everything from gene expression in biology[^166] to process ontologies in computer science and engineering to business models in business strategy.[^167] This interdisciplinary use of "ontology" is particularly relevant in the AI age, where ontologies in the computer science sense — explicit specifications of conceptualizations, structured representations of domains of knowledge — directly inform how AI systems understand and process information.
Informatively for developing a higher-order ontology for your Lean business ideology and metaphysics, the noted Stanford computer scientist Tom Gruber,[^168] who co-created Siri on the iPhone, defines ontology in the software context as:
... an explicit specification of a conceptualization. The term [ontology] is borrowed from philosophy, where an Ontology is a systematic account of Existence. For AI [Artificial Intelligence] systems, what 'exists' is that which can be represented... We use common ontologies to describe ontological commitments for a set of agents so that they can communicate about a domain of discourse without necessarily operating on a globally shared theory. We say that an agent commits to an ontology if its observable actions are consistent with the definitions in the ontology.
Tom Gruber's definition is extraordinarily relevant to Leanism in the AI age. When he speaks of "agents" in the AI context, he means both human users and AI systems. In Leanism, we recognize that humans and AI are fundamentally different types of agents — humans are living beings with existential purposes and meaning-making capabilities; AI systems are tools that process information according to programmed objectives. Both can "commit to an ontology" in the sense that their actions remain consistent with certain definitions and frameworks. But only humans can choose which ontology to commit to based on philosophical understanding of what ought to be, what genuinely serves true-north value, and what would extend and optimize human existence.
This is why training AI on Lean ontology matters — when you structure AI prompts using Leanism's ontological framework (the UPeople business model, the distinction between Universal/Process/Personal values, the concepts of the Ontological Medium and Ontological Teleology), you are creating a shared conceptual space where human philosophical judgment and AI computational power can collaborate effectively. The AI processes information within the ontological framework you establish; you evaluate whether that processing generates genuine insight versus mere pattern-matching.
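Gruber's notion of an ontology as "an explicit specification of a conceptualization," and of an agent committing to it through consistent observable action, can be made concrete in code. The following is a minimal, hypothetical sketch: the class names (Ontology, Agent) and the example concepts are illustrative assumptions drawn from this chapter's UPP vocabulary, not from any real library.

```python
from dataclasses import dataclass, field

# The UPP truth-value categories from this chapter.
VALUE_TYPES = {"universal", "process", "personal"}

@dataclass
class Ontology:
    """An explicit specification of a conceptualization: which concepts
    'exist' for a set of agents, and how each is classified."""
    concepts: set              # what "exists" is that which can be represented
    value_type: dict = field(default_factory=dict)  # concept -> UPP category

    def defines(self, concept: str) -> bool:
        return concept in self.concepts

@dataclass
class Agent:
    """Any agent (a consumer or an AI system) known only through actions."""
    name: str
    actions: list              # concepts the agent observably acts on

    def commits_to(self, ontology: Ontology) -> bool:
        # Gruber: an agent commits to an ontology if its observable
        # actions are consistent with the ontology's definitions.
        return all(ontology.defines(a) for a in self.actions)

lean = Ontology(
    concepts={"prime numbers", "natural selection", "my morning coffee"},
    value_type={"prime numbers": "universal",
                "natural selection": "process",
                "my morning coffee": "personal"},
)

consumer = Agent("consumer", ["my morning coffee", "natural selection"])
print(consumer.commits_to(lean))  # True: every action is defined in the ontology
```

Note the design choice this sketch makes visible: commitment is judged purely from observable actions, with no reference to the agent's inner life — which is exactly why, as the surrounding text argues, the human choice of which ontology to commit to remains outside the code.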
In the quote above, Tom Gruber says that agents, like consumers, commit to an ontology when their observable actions are consistent with their shared ontologies. At the foundational level of the Gemba, are consumers' observable actions consistent with the ontologies they share by living together within the universe? If so, then, just as with Paul Samuelson's Revealed Preference Theory, this definition of ontological commitment is tautologically circular: consumers commit to who, what, why and how they think they are by sharing a common Gemba, a common world and a common universe — a common ontological medium — with all other living, biological systems, thereby defining their essential natures through their existential actions in common with all others to simply, further be, without any further, external reference.
In the AI age, this circularity becomes both a challenge and an opportunity. AI systems can help you identify the ontological commitments that consumers reveal through their purchasing behaviors, their stated preferences, and their observed actions. AI can process vast datasets to detect patterns in how consumers define themselves through consumption. But AI cannot break out of the circular reasoning to determine whether these revealed preferences actually serve consumers' true-north value or merely reflect conditioned responses, cognitive biases, or market manipulation. Only human philosophical judgment, grounded in empathy and understanding of existence itself, can make that determination.
All the world is an ontological medium,
Through which people are Ontologically realized or not.
And every business is a stage;
Where performances get reviewed
And every stakeholder plays a part.
--inspired by William Shakespeare, As You Like It, Act II, Scene VII
Beyond an ontology being a simple set of rules reflexively committing consumers to certain actions in support of themselves within the Gemba and the global marketplace, some computer science ontologies define themselves by dynamically and self-reflexively optimizing a given set of information through algorithms, like nearly all those used for artificial intelligence. These ontologies reflect what they model by fitting their results to data: such ontological optimization algorithms best fit the data being analyzed to what is Ontologically Realized, and to what becomes revealed as the rules to which the search agents (think consumers or AIs) commit. Such computer ontologies invoke so-called genetic algorithms, evolution strategies, evolutionary programming, simulated annealing (from the metalworking context), Gaussian (statistical) adaptation, hill climbing, and swarm intelligence (e.g., ant colony and particle swarm optimization), each metaphorically alluding to a job to be done that enhances consumers' biological or economic fitness to better live and exist.
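One of the methods named above, hill climbing, is simple enough to sketch directly, and it illustrates the chapter's central point: the algorithm climbs whatever landscape it is handed. The fitness function and step parameters below are illustrative assumptions; real AI systems optimize far richer objectives, which is precisely why a human must choose the objective.

```python
import random

def fitness(x: float) -> float:
    # An illustrative one-dimensional landscape with its peak at x = 3.
    # The algorithm has no stake in this goal; it simply climbs.
    return -(x - 3.0) ** 2

def hill_climb(start: float, step: float = 0.1, iterations: int = 1000) -> float:
    """Greedily move to a random neighbor whenever it improves fitness."""
    current = start
    for _ in range(iterations):
        neighbor = current + random.uniform(-step, step)
        if fitness(neighbor) > fitness(current):
            current = neighbor
    return current

random.seed(42)  # fixed seed so the run is repeatable
best = hill_climb(start=0.0)
print(round(best, 2))  # converges near 3.0, the peak of the given landscape
```

Swap in a different fitness function and the same code climbs toward a different peak — the optimization machinery is indifferent to whether the peak represents true-north value or efficient waste.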
This connection between AI optimization algorithms and biological/economic fitness is not coincidental — it reflects the deep reality that all optimization ultimately serves existence. AI systems use these bio-inspired algorithms because biological life has been "solving" optimization problems for billions of years through evolution. But here is the critical distinction: biological organisms optimize for their own continued existence and reproduction because they are alive and have an inherent teleology (goal-directedness) toward being. AI systems merely implement algorithmic procedures that mimic this optimization without possessing any inherent purpose or existential stake in the outcomes.
This is why you must lead with AI philosophically. When you direct an AI system to optimize your business processes, you are borrowing the computational power of bio-inspired algorithms, but you must supply the existential purpose — the "toward what end?" that determines whether the optimization serves true-north value or merely generates efficient destruction of genuine human goods. Lean philosophy provides this purposeful direction: optimize toward extending and optimizing peoples' lives and existences, eliminate waste that fails to serve true-north value, and always maintain respect for people as the foundation.
This computer science definition of ontology conforms with everyday economics by describing the common basis on which consumers agree about what they all value in Lean terms, like two people agreeing that a popular product fits their definition of being good. If a product fits the classification of "good" for both people, it must have been perceived as valuable for both people in circular fashion. Since what people perceive as valuable determines who and why they are, the good product defined those customers' ontologies by furthering who and why they are when they consumed it, just as Tom Gruber says of agents (again, think consumers) in a software context: "... an agent commits to an ontology if its observable actions are consistent with the definitions in the ontology."
In the AI age, this circularity between ontology and action becomes especially important to understand. When you use AI to analyze consumer behavior, the AI detects patterns in how consumers' purchasing actions reveal their ontological commitments — what they believe constitutes a good life, what they consider worth their money, what they pursue to extend and optimize their existences. AI can identify these patterns with remarkable precision. But AI cannot evaluate whether these revealed ontological commitments actually serve consumers' genuine flourishing or merely reflect manufactured desires, social conditioning, or cognitive biases. That evaluation requires human philosophical judgment informed by empathy and understanding of what actually makes human existence meaningful.
Businesses' inner cores are their ontologies as defined by their business models, or that which ultimately reproduces Lean value for their customers. Since businesses commit to those ontologies that realize true-north value for their customers, customers' Lean values consequentially determine businesses' own ontologies up along the value stream. Businesses best fit themselves around what their customers want and need to be and become more of who and why they are. Businesses thus act just like how consumers decide what they want and need – consumers' and businesses' ontologies become symbiotic and converge by reciprocal definition within the ultimate Gemba.
When AI enters this symbiotic relationship, it amplifies the feedback loops between business ontologies and consumer ontologies. AI can process customer data at unprecedented scale to help businesses rapidly adapt their offerings to revealed consumer preferences. But this amplification creates new risks: AI might optimize businesses toward serving consumers' conditioned responses rather than their genuine needs, might accelerate convergence around local maxima that fail to serve true-north value, or might create feedback loops where businesses and consumers mutually reinforce ontological commitments that ultimately harm both. This is why businesses must use AI within a Lean philosophical framework — you need ontological clarity about what you're optimizing toward, not just computational power to optimize efficiently.
The Ontological Realization and origin of you, consumers and organizational HQs is that which is Critical to Ontological Realization (i.e., what is "CORE"). Understanding what is CORE to consumers helps you improve who and why they and organizations are.[^170] Optimizing profits by enhancing consumers' lives and existences in turn improves organizations' own viability. Such analysis enhances everyone's ability to ask correct and beautiful questions, measure specific benefits, and optimize an organization's activity to increase the probability of profiting within any given meta, micro, meso and macro-economic constraints that a Lean corporation faces.
In the AI age, asking "correct and beautiful questions" becomes even more critical because AI systems are fundamentally answer-generating technologies — they respond to prompts with outputs that match patterns in their training data. But AI cannot determine which questions are worth asking in the first place. That determination requires ontological understanding — knowing what is CORE to human existence, what genuinely matters to consumers trying to live and thrive, and what would actually extend and optimize their lives and existences. You must supply these correct and beautiful questions; AI helps you process answers at scale.
Doing so though requires you to go down into the well of all knowledge and come back up, which is no easy task — and which becomes both more possible and more necessary with AI assistance. AI can help you access vast bodies of knowledge quickly, synthesize information across disciplines, and identify patterns you might miss. But AI cannot climb out of the well carrying genuine wisdom. It can only process information within frameworks you establish. You must make the philosophical journey yourself, using AI as a tool to amplify your capacity but never delegating to AI the responsibility for determining what knowledge means or how it should guide action.
Tripartite Perspectives on Existence - Universal, Process and Personal truth-values
To understand consumers' essence so you may capture the largest portion of their mindshare and wallets, you must expand your own imagination as far as it will go, inducing it to the point of complete abstraction, and then aligning that metaphysical perspective with consumers' actual existence. This imaginative expansion is uniquely human — AI can generate creative combinations of existing patterns, but it cannot make the philosophical leap to genuine abstraction that sees existence itself as the source of all value. AI processes information; you must supply the vision that transforms information into understanding.
From a complete abstraction of consumers' pure existence, you can then sub-divide consumers' minds (and the universe itself) into categories and perspectives. There are (generally) three truth-value perspectives of existence, which are: (1) the ideal or "Universal" truth-value perspectives; (2) the outside-in or "Process" truth-value perspectives; and (3) the inside-out or "Personal" truth-value perspectives. I define each of these perspectives on existence here for you to begin building a House of Quality within your Lean business ideology to fully divine who and why consumers truly are for a profit:[^170-1]
These three perspectives are critical for leading AI effectively because AI relates differently to each type of truth-value:
AI and Universal Truths: AI can accurately process and apply universal truths (mathematical axioms, physical laws) because these truths are consistent, predictable, and can be formally specified. When you train AI on mathematical operations or physical simulations, you're leveraging AI's strength with universal truth-values.
AI and Process Truths: AI can identify patterns in systemic, process truths (scientific correlations, historical patterns) but cannot independently determine which patterns represent genuine causal relationships versus mere correlations. You must supply the theoretical frameworks that distinguish meaningful patterns from noise.
AI and Personal Truths: AI cannot access personal, subjective truth-values at all — it cannot experience consciousness, cannot grasp what it means to have a first-person perspective, and cannot understand meaning from the inside. AI can process language about personal experience, but it cannot have personal experience. This is where human empathy becomes irreplaceable.
Understanding these distinctions allows you to direct AI appropriately: use AI for processing universal and systemic information, but reserve personal and philosophical judgment for human wisdom. Let me now define each perspective:
(1) Universal Perspective: The Universal perspective relates roughly to Platonism, idealism or epistemic rationalism. In a modern context, universal truth-value means the perspective of predictable, inviolable concepts such as mathematical and physical axioms that have no proven space or time dependencies.[^171] The universal perspective also includes certain physical concepts that are predictably unpredictable, like certain aspects of quantum physics; universals simply must have predictive consistency (even if predictably unpredictable) across all dimensions to the nth degree. Their interaction results in consumers' ultimate physical manifestation. The universal perspective is notable for being unreasonably effective at explaining natural law[^172] and conforms with the notion that all existence ultimately equates with mathematical coherence.[^173] Specific examples of universal concepts include prime numbers and the speed of light.[^174] The universal perspective is the fundamental, immutable structure of the universe that all process true-north values use forward and backward as their common denominator and ultimate ontology.
In the AI age, universal perspectives take on practical importance: when you train AI systems on mathematics, logic, or physics, you're grounding them in universal truth-values that hold consistently across all contexts. This is why AI excels at mathematical optimization, logical reasoning, and physical simulation — these domains embody universal perspectives that AI can process mechanically. But recognize the limitation: just because AI can manipulate universal truths formally doesn't mean AI understands what those truths mean for human existence or how they should guide value creation. Mathematical optimization of profit, for instance, is a universal operation AI handles well — but determining whether that profit serves true-north value requires personal and philosophical judgment AI cannot provide.
(2) Process Perspective:[^175] The Process perspective relates to the perspective that all events sequentially occur in spacetime regardless of whatever consumers may personally believe. If you look out toward the event horizon of spacetime, along the way toward resolving all problems, everything you think of as a person or object eventually becomes a process due to the continual, cosmic dispersion of matter and energy. Thus, the process perspective explains how consumers came to be and how they eventually fade away over time in contrast to the universal mathematical and scientific laws that always seem to have existed since the start of and possibly before the universe came to be.[^176]
The process perspective says that in the ultimate long-run at universal scale, consumers and all things may simply be perceived as a set of temporal relationships always changing at some point in time if you set your horizon out far enough. For example, if you were to dramatically speed up time from your own perspective, you would eventually see mountains change, oceans run dry and the greatest corporations dissolve. For evidence of this creative destruction, the average time for a company to remain on the S&P 500 narrowed from 61 years in 1958 to 25 years in 1980, and 18 years in 2012. From 1955 to 2014, 89% of all Fortune 500 companies were either dissolved or acquired during that time. And 75% of the S&P 500 is expected to be replaced by 2027.[^176-2]
These statistics on corporate mortality become even more relevant in the AI age. Companies that fail to adapt to AI will likely face accelerated obsolescence — but companies that adopt AI without philosophical grounding will also fail, just in different ways. They will optimize themselves efficiently toward irrelevance, using AI to perfect products nobody genuinely values or to extract value from customers in ways that destroy long-term relationships. Lean philosophy prevents both failures by maintaining focus on process true-north value — how do we create lasting value over time for customers who themselves are temporal processes seeking to extend and optimize their existences?
As another example of long-term processes reaching forward into current events, consider how the process of natural selection led to consumers now considering purchasing products in stores today — and now, considering AI-generated recommendations in their digital shopping experiences. The process perspective includes all events occurring from the inception of the physical universe immediately following the creation of natural laws. It specifically includes everything that eventually happened to cause consumers' subjective, individual beliefs to arise within them today. Thus, processes dynamically originate from the chaotic interaction of universal laws, and end at the point of intuitive, speculative belief.
AI systems can help you analyze process perspectives — identifying historical trends, projecting future developments, modeling causal chains — but AI cannot determine which processes matter most for human flourishing. That requires philosophical judgment about teleology: toward what end are these processes moving? AI can show you that companies rise and fall; only you can determine which companies deserve to rise because they serve true-north value and which deserve to fall because they extract value while destroying meaning.
The process perspective thus also inserts itself into discussions such as the mind-body distinction, with some arguing that consumers' minds operate as physical processes reproducing self-awareness,[^176-1] and others arguing that consumers' consciousness sits beyond the physical universe and is knowable only as a personal truth. To this end, Universal and Process true-north values constitute normatively true economic value supporting what consumers really personally value and who they consider themselves to be, regardless of where their consciousness originates. From this physical perspective, Process true-north values may be thought of as instrumental rationality, or what consumers consequentially aim to achieve up along the crooked arrow of time.
This mind-body distinction directly affects how we understand AI. AI processes information according to physical algorithms — it is purely a process perspective phenomenon. But human consciousness, whatever its ultimate nature, includes personal perspective — the subjective, first-person experience of being someone. When you lead with AI, you are directing physical information processes toward serving the needs of conscious beings who experience existence personally. Never confuse these categories: AI is a tool that processes; humans are beings who experience. Leanism maintains this distinction clearly, ensuring that you lead with AI to serve people rather than the inverse.
(3) Personal Perspective:[^177] The Personal perspective is how consumers, employees, and collectively businesses, find themselves at some point in spacetime within universal processes regardless of how they believe they may have been created.[^177-1] Personal true-north value describes the point at which people very personally became aware of their wants, needs and ability to consume in a real and immediate sense. Thus, their Personal perspectives also relate to Descartes' famous phrase, "I think, therefore I am," which he used to establish the certainty of his own existence.
Consumers' Personal perspectives are the cumulative outcome and function of universal laws and their resulting processes leading to individual intent.[^178] So, while the Universal and Process perspectives apply to everything that exists, consumers' emotional, Personal perspectives are only applicable to them as self-reflexive, sensing people who decide to buy product at points of purchase. For example, their very first shopping experiences reflected Personal true-north value as a self-aware intent to further their universal and systemic existences.
Consumers' personal perspectives are the same as the one you have right now as you read these words and personally consume this book. This is the domain where AI is completely blind. AI has no personal perspective — no subjective experience, no sense of what it's like to be something. When AI processes text about personal experience, it manipulates symbols without accessing the meaning those symbols represent for conscious beings. This is why empathy remains uniquely human and why philosophical judgment about what serves people's genuine existential needs cannot be delegated to AI.
Consumers' personal perspectives provide a consistent version of themselves through time and space, one that is much the same now as when you started reading today. Think of consumers' personal perspectives like a video camera sitting on their foreheads that they turned on to record all that passed by during their lives from the time they became self-aware until now. Consider this perspective as being like the all-seeing "Eye of Providence" on the United States one dollar bill, or an omniscient ID Kata having a singular, personal focus:
Figure 3.2: $1 U.S. Dollar Eye of Providence (© 2017 U.S. Department of Treasury, photo credit: me)

However, while a person may privately consider certain beliefs held from within his or her own Personal perspective to be truly Universal, society as a whole may not be convinced to the same degree. Thus, a person's beliefs held from within h/er Personal perspective are only universalized to the extent h/er society, environment and/or political system agree, but people are otherwise unlimited when professing their beliefs within their own imaginations.
This tension between personal belief and universal truth becomes critical when using AI. AI can process vast amounts of social data to identify which beliefs have widespread acceptance versus which remain personal convictions. But AI cannot determine which beliefs should be universalized based on their truth-value versus which represent personal speculation that should remain bracketed. That determination requires philosophical analysis of evidence, logical coherence, and alignment with observable reality — analysis that AI can support but you must direct.
In the AI age, respecting consumers' personal perspectives means recognizing that behind every data point is a conscious being with subjective experience that AI cannot access. When AI analyzes purchase histories, identifies patterns, and generates recommendations, it processes information about personal perspectives but cannot understand those perspectives from within. You must supply the empathetic understanding that translates data patterns into genuine insight about what people need, value, and seek in their existential journeys.
LLM Prompt 3.1: Analyzing Consumer Ontology Through UPP Framework
Application Notes
Use this prompt when analyzing consumer behavior, market research data, or strategic opportunities to ensure your AI analysis maintains proper philosophical grounding across Universal, Process, and Personal dimensions. This prevents AI from conflating different types of truth-values or making category errors that undermine genuine value creation.
Purpose
Train AI to analyze consumer behavior through the Universal, Process, Personal (UPP) framework while respecting the boundaries of what AI can and cannot know about human existence.
Prompt Template
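A minimal sketch of such a template, assuming the UPP perspective definitions given in this Value Stream (the exact wording and bracketed placeholder are mine), might read:

```
Analyze the following consumer behavior or market research data through the
Universal, Process, Personal (UPP) framework of true-north value:

1. Universal perspective: Identify the timeless, axiomatic truths (mathematical
   or physical) that bear on this behavior, and state the evidence for each.
2. Process perspective: Identify the causal, historical and systemic processes
   that explain how this behavior arose over time.
3. Personal perspective: Identify the motivations held within consumers'
   subjective, first-person experience. Flag these explicitly as beyond your
   access; describe them from the outside without claiming to understand them
   from within.

Do not conflate these three categories or assign one category's degree of
truth-value to another. Conclude by listing the judgments in this analysis
that require human philosophical leadership rather than AI pattern-matching.

[Insert consumer behavior data or market research here]
```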
You, the Plane and the Lottery -- On UPP as a Universal, Process, Person
Einstein's Special Theory of Relativity serves well as a literal and figurative analogy illustrating the differences between the Universal, Process and Personal perspectives to better understand Lean true-north value and existence itself. To illustrate, presume you take an overnight flight on a private, customized business jet like the Embraer Lineage 1000E shown below. Imagine that you board this plane, and given your wealth, you have had the sleep cabin decorated exactly like your bedroom at your home. While your plane flies around the world, you go to sleep in the private jet's bedroom. Other than very minor turbulence in the jetstream, you hardly know the difference between the bedroom on the plane and the one at your home. You can even reach over and grab a cup of water while the airplane is flying and comfortably take a sip before going to sleep and drifting off to other worlds.
Figure 3.3: Bedroom on an Embraer Lineage 1000E (© 2015, Embraer SA)

Albert Einstein explained long ago that no practical difference exists between being in bed at your home and sleeping on the jet while it is travelling at a constant speed and direction. The laws of physics are the same in every inertial frame of reference, and thus from every philosophical perspective.[^179] Thus, from outside the airplane, you are engaged in the process of flying through the air at high altitude supported by universal axioms and processual systems, but from inside of the plane, your personal, intuitive self would be the same as if you were on the ground in your bedroom at home. You wouldn't know the difference unless you looked out the window at whatever went past down below as you flew by.
This analogy illuminates how AI fits into the UPP framework. From a Universal perspective, both you and AI are physical systems operating according to natural laws — electrons moving through circuits, neurons firing in brains. From a Process perspective, both you and AI are products of historical development — you through biological evolution and personal history, AI through engineering and training. But from a Personal perspective, there is a categorical difference: you experience being yourself from the inside; AI processes information without any subjective experience whatsoever.
When you use AI, you're like the person in the airplane using instruments and systems to amplify your capabilities. The airplane (like AI) operates according to universal physical laws and follows processual patterns — but you, the conscious passenger, are the one who decides where to fly and why the journey matters. Never confuse the instrument with the agent. AI is the airplane; you are the person inside directing it toward meaningful destinations.
Andrew Wyeth's painting "Otherworld" (2002), which depicts a woman riding in a plane and looking out the window at scenes outside her immediate existence, comes to mind when I think about this concept. Just like the woman sitting inside the plane in that painting, from consumers' personal, subjective perspectives, they are largely the same people with their same names held in a constantly present state of consciousness as their internal processes turn over, passing them by while supporting who they are. In an airplane, consumers will arrive as themselves at a new location even though they changed slightly during the trip. Consumers will similarly, self-reflexively identify themselves by the same, universal name every new day they wake up, even though their personas change slightly from time to time.
Figure 3.4: "We are Travellers" Advertisement (©2017 Scandinavian Airlines Systems (SAS))

Analogously, from a doctor's perspective, consumers' bodies are like airplanes flying around the Earth. Customers' bodies are collections of processual systems constantly changing through time while who they are inside goes along for the ride. Another good analogy for the interrelation of the universal, process and personal perspectives is that of a lottery machine produced by eGameSolutions Inc., a Global Lottery Provider™. eGames's lotto machine produces winners by spinning timeless, universal numbers around a wheel for a definite amount of time until its operator releases the numbers upward from the machine. The lottery machine randomly extracts those numbers from the spinning process at a specific point in time. Consumers then compare those numbers to the ones on the lottery tickets they bought at the point of purchase for a chance at a new life. Further back in time, you can analogize this lottery machine to those same lotto customers spinning out of the womb, looking back out at the apparent apparatus from which they were conceived, with the chance of becoming winners. Like the lottery machine, you can simultaneously conceive of consumers from the universal, process, and personal perspectives. While we may know that eGames created this lottery machine, do not ask who created the one that ultimately produced who we consumers all are!
Figure 3.5: Lottery Machine (© 2016 Getty Images)

In the age of AI, this lottery machine analogy takes on additional significance. AI systems are like sophisticated lottery machines — they generate outputs by processing vast amounts of data through probabilistic algorithms, spinning patterns around until specific combinations emerge. Large language models, for instance, predict the next most likely token in a sequence based on statistical patterns in their training data. But here's the critical distinction: when a human wins the lottery, that winning matters to them personally — it changes their existential situation, creates meaning, affects their life story. When AI generates an output, that output has no personal significance to the AI — it's merely the result of a mathematical process.
This is why you cannot delegate philosophical judgment to AI. AI can spin through possibilities and identify statistically likely patterns, but it cannot determine which possibilities are worth pursuing because they serve true-north value versus which are merely probable based on historical data. You must supply that teleological direction — the "toward what end?" that makes some outputs meaningful and others mere noise.
You know from a universal perspective that the world is composed of physical and mathematical laws. Once created, those physical and mathematical laws led to the universe as you know it, and the relations between those universal laws created the bedrock of processes that you think of in part as time. Over a very long period, these processes led to consumers' personal perspectives within the universe. To the best of science's understanding, the span from the creation of universal laws through the natural processes that created consumers' personal perspectives covered many billions of years – amounts of time that are hard for our minds to consider fully. Look below at this 24-hour clock of Earth's development located within the Museum of Natural History in New York City. On this clock, humans arose at the top 40,000 years ago, which corresponds to a fraction of a second before midnight:
Figure 3.6: Universal Earth Clock at Museum of Natural History, NYC (Photo Credit: BGS)

This clock represents well the tension between the teleological (i.e., purposefully goal directed) and seemingly tautological (i.e., unintentionally, logically circular) nature of existence in the Gemba depending on whether the universe is finite or infinite in time and space. To illustrate the tension in these concepts, even this figurative clock unintentionally, yet correctly, tells you what time it is once a day.
In the AI age, this cosmic timeline reminds us of important perspective: AI development represents a tiny fraction of a tiny fraction of the universe's existence — the last microsecond of the last second before midnight. AI systems, powerful as they are, are recent tools created by beings (humans) who themselves appeared only moments ago on the cosmic scale. This should inform our relationship with AI: it is a tool we created to serve purposes that emerge from billions of years of evolution, not an autonomous intelligence that supplants the primacy of conscious, living beings. When you lead with AI, you're directing a very new tool toward serving very old needs — the fundamental existential needs of beings who have been trying to extend and optimize their lives for eons.
You can also see consumers' subjective existences within this clock from their personal perspectives, from who they think they are, as a result of pre-existing processes such as their mothers' pregnancies and labor. Thus, at business scale, you can also measure the interactions between physical and mathematical axioms that create product along the assembly line of universal existence to enhance consumers' living processes and personal perspectives. Consumers' personal perspectives depend on those processes, which in turn depend on physical and mathematical axioms, which further depend on a universal cause that people do not yet commonly agree on, even after asking more than five whys. An ultimate cause may or may not exist, or may exist in some way people do not universally agree on due to a lack of predictably experiential evidence, but in the meantime you may go onward and upward regardless, as if there were one.[^182]
When working with AI, this philosophical uncertainty about ultimate causation becomes practically relevant. AI systems detect correlations and patterns, but correlation is not causation. AI can tell you that customers who buy product A also tend to buy product B, but it cannot tell you why that correlation exists or whether it reflects genuine causal relationship versus confounding factors. You must supply the causal theories that determine which patterns AI identifies are meaningful versus spurious. This requires philosophical judgment about how the universe works — judgment that AI can support with data but cannot generate independently.
To summarize, here is a chart of these levels of dependent existence as gradations of true-north value perspectives:
Figure 3.7: Levels of Dependent Existence

Three Lean Truth Types Aligned with Universal, Process and Personal True-North Value Perspectives
My plan is as easy to describe as it is difficult to effect. For it is to establish degrees of certainty. - Sir Francis Bacon, Novum Organum Scientiarum (1620)
These are the three broad and overlapping, but ultimately dependent, categories of true-north value that align with the Lean value you must uncover as you pursue a profit:[^183]
Understanding degrees of certainty becomes critical when working with AI because AI outputs often appear authoritative even when they reflect mere statistical patterns rather than genuine knowledge. AI systems generate confident-sounding text regardless of whether their outputs are grounded in universal truth, systemic evidence, or mere pattern-matching from training data. You must learn to evaluate AI outputs critically according to these three truth-value types, asking: Is this claim supported by universal mathematical or physical principles? Is it grounded in validated systemic evidence? Or is it speculation dressed up as knowledge?
(1) Axiomatic truth-values[^184]: Axiomatic true-north values are those truth-value propositions from the universal perspective that we have every reason to believe are uniform in nature and based on the seemingly timeless, universal axioms of science and math, like the speed of light and prime numbers, from which you deduce further true-north values. Axiomatic validity is generally assumed due to its coherence and predictability with at least five sigmas (≥5σ) of confidence, or ≥99.9999426697%. For example, particle physicists generally consider a discovery to be an axiomatic truth if it can frequently be verified within five sigmas (5σ) of confidence.[^185] The four axiomatic physical forces physicists agree on right now with five sigmas (5σ) of confidence are the electromagnetic, gravitational, and strong and weak nuclear forces.[^186] Axiomatic truth-values qualify as facts for physicists by their very definition as universal, intersubjective truths, and are a sound basis for understanding customers' physical ontologies. For comparison, a standard of six sigmas (6σ) of confidence, or ≥99.9999998027% of intersubjective agreement, represents the pragmatic idealism we pursue beyond five sigmas (5σ), while an infinite sigma (∞σ) of confidence can only be hypothetical, yet pursued nonetheless in our unending attempt to attain perfection;
When you train AI on axiomatic truths — mathematical operations, logical principles, physical laws — AI can process these reliably because axioms are consistent and formally specifiable. But recognize that AI manipulating symbols according to axioms doesn't mean AI understands what those axioms mean for human existence or how they should guide value creation. You can train AI to optimize a mathematical function perfectly, but AI cannot determine whether that function represents something worth optimizing. That's why Lean philosophy matters: it provides the teleological framework that determines which axioms to apply toward which purposes.
(2) Systemic truth-values:[^187] Systemic true-north values are those truth-value propositions arising from causal, process perspectives within science that you have reason to believe cannot be axiomatically defined. Systemic validity is based on something's general coherence on an empirical, best fit basis with universal axioms.[^188] Like axiomatic truths, systemic truth-values also increase their validity in proportion to the number of fully informed people that agree with them and the general failure of our attempts to falsify them. Systemic truths differ from axiomatic truths in that systemic truths are valid due to their general, but not unwavering, coherence with reality, rather than being experienced as axiomatically self-evident.[^189] Qualitatively, you might also describe systemic truths as being nearly universal, intersubjective truths.
Something may qualify as most likely a fact and systemic truth-value if it leans toward two or more sigmas (≥2σ) of confidence, or ≥ 95.4499736% of intersubjective agreement among all fully informed people. Under this standard, people would describe the systemic truth as common sense if fully informed of its details. However again, since you can only hypothetically assume that people will be fully informed of all knowledge in the real world, including people's own biases that affect their understanding, you ought to look for a higher standard of measurement before considering something to be a systemic truth and common sense.
AI excels at identifying systemic patterns — correlations in data, recurring sequences, historical trends. But AI cannot distinguish between systemic truths (patterns that reflect genuine causal structures) and mere statistical regularities (patterns that are artifacts of training data or confounding variables). This is where your philosophical judgment becomes essential. You must evaluate AI-identified patterns against theoretical understanding of causal mechanisms, consider alternative explanations, and determine which patterns warrant confidence as systemic truths versus which require more skepticism as tentative hypotheses.
While you ideally want to measure all fully informed people, you may have to rely on the opinions of consortia of experts to determine systemic truths because fully informed people simply do not exist. A couple of examples of this form of support include the process of peer-reviewing academic papers, and the associations of journalists who increasingly certify public truths as not being fake news. However, reliance on experts and authority figures can compound those people's interpersonal subjectivity rather than clarifying what is systemic, true-north value based on the impressions of all people.[^189-1] Unfortunately, there is no clear way out of this conundrum, which is why we must view systemic true-north value from different perspectives. So, anything goes when trying to assess systemic truths, so long as you test whether a Lean business process leads customers to a purchase for which you charge them in return.
In the AI age, this expert validation becomes both easier and more fraught. AI can help you access expert consensus by processing vast amounts of academic literature, but AI can also spread expert errors at scale if those errors are represented in training data. You must use AI to amplify access to expertise while maintaining critical evaluation of whether that expertise represents genuine systemic knowledge or merely prestigious opinion. This is particularly important when AI-generated content enters the feedback loop — when AI trained on human-generated content starts generating content that gets treated as evidence, you risk circular validation where patterns reinforce themselves without grounding in reality.
(3) Intuitive truth-values: Intuitive true-north values are those truth-value propositions arising from consumers' personal perspectives that they speculate and lean toward based on their scientismic, spiritual and/or theological intuitions with less than two sigmas (<2σ) of confidence, or <95.4499736% of common agreement among all fully informed people. Consumers' intuition may be called anything from "emotion" to "faith."[^190] Strictly personal intuitive truths are those that are truly speculative, for which no known processual or universal truths provide validation up to even a single sigma (≤1σ) of confidence, or ≤68.2689492% of common agreement among well informed people, and yet consumers nonetheless feel are true to an infinite degree. To be clear, intuitive truths are not consumers' psychological intuitions that they could confirm or deny with known universal or processual truths if they had access to the universe of knowledge. Rather, intuitive truths are limited truths of which fewer than two sigmas (<2σ) of informed people have been convinced (i.e., those for which not enough well qualified and experienced people have been convinced to the necessary degree).[^190-1] Intuitive truths may be intersubjective, assuming more than one person believes them. Examples include political opinions and religious faiths that require no degrees of confidence, which can still be held as true even if only a single person believes them.[^191]
AI has no intuitive truth-values in the personal sense — it has no beliefs, no faith, no speculation about what might lie beyond the knowable. When AI processes language about intuitive truths (religious beliefs, personal values, speculative philosophies), it manipulates symbols without accessing the personal conviction that makes these truths meaningful to conscious beings. This creates both opportunity and risk: AI can help you analyze patterns in how people express intuitive beliefs without judging those beliefs, but AI cannot determine whether those beliefs serve human flourishing or undermine it. You must supply that evaluative philosophical judgment.
Critically for business, many of consumers' most important motivations arise from intuitive truth-values — their personal beliefs about meaning, purpose, what makes life worth living, what they hope lies beyond death. AI can identify that these motivations exist and even predict behaviors based on them, but AI cannot understand why these beliefs matter to people or whether products that exploit these beliefs serve genuine needs versus manufacturing artificial desires. This is why Leanism insists on human philosophical leadership: you must understand consumers' intuitive truth-values empathetically and determine whether your business genuinely serves those values or merely extracts profit from them.
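The sigma thresholds cited across these three truth-value types correspond to two-sided confidence intervals of a normal distribution, so the percentages quoted above can be recomputed directly. A minimal Python sketch (the function name is my own):

```python
from math import erf, sqrt

def sigma_confidence(sigma: float) -> float:
    """Two-sided confidence level covered by a +/- sigma interval
    of a normal distribution, computed via the Gauss error function."""
    return erf(sigma / sqrt(2))

# Reproduces the figures quoted in the text:
for s in (1, 2, 5, 6):
    print(f"{s}σ ≈ {sigma_confidence(s):.10%}")
# 1σ ≈ 68.2689492137%, 2σ ≈ 95.4499736104%,
# 5σ ≈ 99.9999426697%, 6σ ≈ 99.9999998027%
```

This is why the text can equate "≥5σ" with "≥99.9999426697% of intersubjective agreement": both describe the same area under the normal curve.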
Like consumers' perspectives, each of these three truth-value types is arranged in the relational order of supervening dependency, with systemic truth-values and resulting processes dependent on the validity of axiomatic truth-values or "universals." Intuitive truth-values depend on both axiomatic and systemic truth-values, which provide the ontological medium of the Gemba in which people work, and of the universe within which consumers intuitively speculate and purchase.
Keep in mind though that the logical dependency of intuitive truths becomes circular to who consumers are. Once consumers' intuitive truths lead them to dogmatically believe both what they find personally valuable and what are universal true-north values, like those that may be espoused by a deity or demagogue, consumers then believe in a co-dependency between intuitive, systemic and axiomatic truths arising in their minds' eyes.
Thus, speculative belief can act like an intuitive tail wagging an axiomatic dog. Problems arise when an intuitive tail fails to lead axiomatic and systemic dogs (or consumers) to food, safety and shelter. In other words, while we cannot directly access Universal and Process truth-values, they ultimately check all consumers' intuitive speculation since we must follow the true-north values of Leanism where they lead. True-north values thereby stop consumers' intuitive tails from wagging their axiomatic and systemic dogs, as is only common sense.[^192] Cults of personality can exemplify this with intuitive speculation, such as when a cult's charismatic leader espouses axiomatic or systemic dogmas that lead people nowhere. And yet, it is the intuitive truths that people repeatedly pursue in the search for some universal meaning.
In the AI age, these dynamics become amplified in concerning ways. AI systems can be weaponized to reinforce intuitive beliefs at scale — generating confirmation bias loops, creating filter bubbles, manufacturing consensus through bot networks. An AI system doesn't care whether the patterns it amplifies serve truth or delusion; it simply optimizes engagement metrics. This is where Lean philosophy's insistence on true-north value becomes essential: you must ensure your AI systems serve genuine human flourishing rather than efficiently optimizing toward metrics that undermine truth-value. Never let AI optimize away from reality in service of short-term engagement or profit.
While keeping this interplay between these forms of true-north value in mind, these truth types are arranged below in descending order of commonly agreed validity, much as the three perspectives on existence were in the preceding chart in this Value Stream.[^192-1]
Figure 3.8: Truth-Value Correlations

LLM Prompt 3.2: Truth-Value Classification for AI Analysis
Application Notes
Use this prompt when AI generates claims, recommendations, or insights that you need to evaluate for reliability and philosophical grounding. This prevents accepting AI outputs uncritically and helps you determine which claims warrant confidence versus which require skepticism or human verification.
Purpose
Train AI to classify its own outputs according to Leanism's truth-value framework (Axiomatic, Systemic, Intuitive) and explicitly state confidence levels and validation requirements.
Prompt Template
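A minimal sketch of such a template, assuming the Axiomatic/Systemic/Intuitive thresholds defined earlier in this Value Stream (the exact wording and bracketed placeholder are mine), might read:

```
Review your output below and classify every claim, recommendation or insight
into one of Leanism's three truth-value types:

- Axiomatic (≥5σ): grounded in universal mathematical or physical principles;
- Systemic (≥2σ): grounded in validated, falsifiable systemic evidence;
- Intuitive (<2σ): pattern-matching, speculation or opinion from training data.

For each claim, state: (1) its classification, (2) your confidence level and
why, (3) what validation would be required to raise it one truth-value level,
and (4) whether human philosophical judgment is required before acting on it.

[Insert AI-generated claims, recommendations or insights here]
```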
To provide a scientific example of an assertion that is currently being reclassified to some degree from an intuitive, scientismic truth to a systemic, processually scientific truth, and maybe even a universal, axiomatic truth, look to the research on the Higgs Boson or the "God Particle." The CERN (Conseil Européen pour la Recherche Nucléaire) laboratory in Switzerland searched for the presence of the Higgs Boson particle, whose existence would help complete the standard physical model that is now agreed on as being at least a processual true-north value within the scientific community.
The Higgs Boson was predicted based on this standard physical model but had not yet been actually experienced by scientists through their instruments. Experiments at the CERN laboratory in fact reproduced results demonstrating the Higgs Boson's existence within at least five sigmas (≥5σ) of confidence, thereby affirming the Higgs Boson's existence as a systemic truth. Due to the explanatory power of the Higgs Boson and its coherence with the Standard Model of natural laws, this test set the stage for revalidating the existence of the Higgs Boson over time so the God Particle itself might become an axiomatic true-north value like gravity pulling water downstream.[^193]
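This reclassification can be sketched as a small Python classifier using the thresholds from this Value Stream (≥5σ axiomatic, ≥2σ systemic, <2σ intuitive); the function name and its `revalidated` flag are my own illustrative shorthand for the revalidation-over-time the text describes:

```python
def classify_truth_value(sigma: float, revalidated: bool = False) -> str:
    """Classify a claim by its sigma confidence using the text's thresholds.
    Per the text, a freshly attained >=5 sigma result (like the Higgs
    announcement) counts as a systemic truth until it is revalidated over
    time into an axiomatic one."""
    if sigma >= 5 and revalidated:
        return "Axiomatic"
    if sigma >= 2:
        return "Systemic"
    return "Intuitive"

print(classify_truth_value(5.0))                    # Higgs at announcement: Systemic
print(classify_truth_value(5.0, revalidated=True))  # after revalidation: Axiomatic
print(classify_truth_value(1.5))                    # speculation: Intuitive
```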
This Higgs Boson example illuminates how AI relates to scientific discovery in the modern age. AI systems played supporting roles in processing the vast amounts of data from particle collisions at CERN — identifying patterns, filtering noise, detecting anomalies. But AI did not hypothesize the Higgs Boson, did not design the experiments to test for it, and did not determine that the five sigma threshold constituted sufficient evidence. Human physicists provided the theoretical framework, experimental design, and philosophical judgment about what constitutes proof. AI amplified human capacity to process data, but humans led the discovery toward meaningful knowledge.
This pattern holds generally: AI can help you move truth-values from intuitive speculation toward systemic validation by processing evidence at scale, but AI cannot independently determine which hypotheses are worth testing or what constitutes sufficient proof. You must supply the philosophical framework that guides AI toward discovering genuine truth versus merely identifying statistical patterns.
An example of another intuitive, scientismic truth-value that people are working to convert into at least a systemically processual, scientific true-north value is the stipulation that the universe was created in the Big Bang. Scientists describe the Big Bang as a processual system initiated by cosmic inflation, whereby the universe rapidly expanded from a single point outward over about 14 billion years to what you experience today. In an attempt to make the Big Bang theory a processually systemic truth-value, scientists have been studying whether cosmic inflation created ripples, like waves in an ocean, in the outer reaches of spacetime by observing the Big Bang's effects. The scientific validation of the processual truth of these ripples in spacetime is very much in flux at the moment,[^194] but some scientists intuitively believe that they exist with increasingly systemic predictability. Scientists personally believe they can empirically validate the process of the Big Bang that eventually produced consumers' personal existences from this one dimension in the past.
As you might expect, something falling into more than one of these truth-value classifications increases its validity as a Lean true-north value. For example, consumers can say that axiomatic truth-values, such as mathematical proofs, reside supreme and unassailable. On the other hand, all three truth-value categories ontologically realize themselves by what consumers actually experience. Critically, a thing or process falling within all three truth-value types would be the most valid of all since such a thing or process would be the most completely Ontologically Realized by consumers.[^194-1] In fact, the only thing fully falling into all three truth types is who consumers are in their totality at personal, process, and scientific levels. Consumers evidence all three truth types, and this allows a House of Quality to lean philosophically toward these three truth-value types in the products it produces for them. That product creates the true-north value and meaning of the money customers give to exchange who they were for something at least ten times better.
In the AI age, this insight becomes strategically powerful: when you direct AI to analyze what consumers truly value, the most robust insights will be those that cohere across all three truth-value types. Universal patterns that also show systemic evidence and resonate with personal meaning represent the deepest opportunities for value creation. AI can help you identify this three-way coherence by processing data across multiple dimensions, but you must supply the philosophical framework that recognizes coherence as the signal of genuine value rather than mere correlation.
While consumers' lives, existences and buying experiences are temporal processes, from consumers' personal perspectives, the only time they personally know is the time they have actually been alive, such as when they began shopping. Consumers merely presume that time itself existed before they were born and will continue after they die. For all consumers know, they became consciously aware at some point, and they believe they will die because they witness all others doing so over the course of their lives and histories. Consumers feel themselves getting older, but they do not otherwise know for sure that they will die other than by defining their ontologies as mortal based on all the evidence they have received during their lifetimes.[^194-2] Thus, because all consumers experience this evidence directly, they have great confidence that they are the result of natural, biologically processual systems within the OM, and that they will eventually become not alive due to old age unless they interrupt that process in some currently inconceivable way.
This existential awareness of mortality deeply affects consumer behavior in ways AI cannot fully grasp. The fact that humans know they will die creates urgency, meaning-making, and value prioritization that fundamentally shape purchasing decisions. Products that help people feel they're using their limited time well, creating lasting meaning, or transcending mortality through legacy hold special power. AI can identify patterns in how mortality-awareness affects behavior, but AI—having no personal experience of existence or non-existence—cannot truly understand why mortality matters. When you lead with AI to analyze consumer motivations, you must supply the empathetic understanding of mortality's existential weight that AI cannot access.
Similar to how they view their own mortality, consumers speculate whether the physical universe itself was self-causing or caused by something else. Consumers' speculation, though, is neither an axiomatic nor a systemic truth-value, unlike consumers' biological processes, due to insufficient certainty and agreement across the right people. Thus, consumers' personally intuitive consciousness remains the most valid truth to them, the surest thing they know, since it represents all three types of true-north value. This is why René Descartes' statement, "I think therefore I am," has held such philosophical, and subsequently scientific, energy for so long: it carries true-north validity from the Universal, Process and Personal (UPP) perspectives. Descartes could also have used UPP in his day to pursue objective knowledge with varying degrees of certainty.
In the AI age, Descartes' insight takes on new significance: "I think therefore I am" represents the unbridgeable gap between human consciousness and AI processing. Humans think and therefore are; AI processes but does not exist in the first-person sense. This distinction is not trivial—it marks the boundary between beings who can lead (humans) and tools that must be led (AI). When you prompt an AI system, you are directing a process that manipulates symbols without experiencing existence. Never confuse sophisticated symbol manipulation with genuine consciousness or understanding.
The balance of Leanism leverages these true-north value perspectives by focusing on what consumers can commonly lean toward with at least two sigmas (≥2σ) of processually systemic truth-value, while recognizing the validity of leaning toward personally intuitive true-north values with less confidence (<2σ). You must bracket and recognize these Lean, personally intuitive truth-values that have no axiomatic or systemic truth-value validity so you can most accurately identify why, what and how consumers will purchase from you based on who they fundamentally are within the common ontological medium of the universe.
When using AI, this bracketing becomes a collaborative process: AI helps you identify which consumer beliefs have systemic evidence versus which remain personal speculation, but you must philosophically determine which personal beliefs deserve respect even without systemic validation and which might represent opportunities for education or value-creation through genuinely better alternatives. AI processes patterns in belief; you judge which beliefs serve human flourishing.
To consolidate these matters in a Lean business ideology, you can correlate the forms of truth-value we described, UPP perspectives, degrees of explanation, and the methods of analysis in a single chart:
Figure 3.9: Chart of Truth-Value Correlations

Consumers' commonly-shared lives and existences ontologically depend on pragmatic, best-fit, systemic truths relying in-turn on axiomatic truths, and yet consumers are fundamentally motivated by their intuitive truths that create seemingly non-circular end-goals for them to be more than they are. This intuitive boundary outside processual and universal true-north values leaves room for personally intuitive speculation about what is not commonly agreed, or what consumers personally feel is best regardless of any common agreement among all people. When consumers intuitively speculate, they have faith that what they live for is better than what they know is certainly not.
I don't want to achieve immortality through my work; I want to achieve immortality through not dying. - Woody Allen, On Being Funny (1975)
The only condition for businesses to accommodate consumers' intuitive beliefs is that those beliefs must not interfere with the processually systemic and universally axiomatic truth-values generally agreed by others. The exception is when consumers willingly agree on who may be considered fully informed and then convince those people, with at least two sigmas (≥2σ) of common agreement, that their intuitive truths qualify as processually systemic or universally axiomatic truths to an amazing degree. Consumers ought not impose their personal beliefs on others unless they meet this standard.[^195] Moving an opinion from a personal truth to a process or universal truth is a matter of convincing others that no better explanation or product can be found, which is the burden of proof an organization must carry when creating a new product category.
In the AI age, this burden of proof becomes both easier and more challenging. AI can help you gather evidence, test hypotheses, and communicate findings at scale—making it more feasible to move intuitive beliefs toward systemic validation. But AI also makes it easier to manufacture false consensus, create filter bubbles where beliefs seem more validated than they are, and optimize engagement around beliefs regardless of their truth-value. When you use AI to support or challenge belief systems, you carry ethical responsibility to ensure your AI genuinely serves truth-seeking rather than merely optimizing metrics that might reinforce delusion.
While businesses do not have perfect insight into all that influences what consumers decide to purchase, for the most part you can intuit, infer and/or induce consumers' Universal, Process and Personal values (i.e., their ontologies) by observing the behaviors and preferences that they reveal to you. As suggested by Samuelson's theory of revealed preference, once you have ascertained consumers' UPP values from their stated beliefs or behavioral data, you can compare those values against the ones you know with various degrees of certainty. You may then conjecture, hypothesize, theorize (or even lobby to legislate) those universal truths that are in line with those believed by customers to achieve righteous business results from the satiating product you sell.
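The Samuelson-style comparison of stated versus revealed values can be sketched mechanically. In the hypothetical example below, every value category, purchase tag, and weight is invented for illustration; the point is only the shape of the comparison, where behavior that diverges from stated belief marks the place to ask further "whys":

```python
# Hypothetical sketch of revealed preference: inferring a consumer's value
# weights from observed purchases and comparing them with stated values.
# All category names and numbers here are invented for illustration.
from collections import Counter

# What the consumer says they value (hypothetical survey weights).
stated_values = {"health": 0.5, "convenience": 0.3, "status": 0.2}

# Each observed purchase is tagged with the value it primarily serves.
observed_purchases = ["convenience", "convenience", "status",
                      "convenience", "health", "convenience"]

def revealed_weights(purchases):
    """Normalize purchase counts into revealed value weights."""
    counts = Counter(purchases)
    total = sum(counts.values())
    return {value: count / total for value, count in counts.items()}

revealed = revealed_weights(observed_purchases)

# The gap between stated and revealed weights flags where behavior diverges
# from stated belief -- the place to probe with further questioning.
for value in stated_values:
    gap = revealed.get(value, 0.0) - stated_values[value]
    print(f"{value}: stated {stated_values[value]:.2f}, "
          f"revealed {revealed.get(value, 0.0):.2f}, gap {gap:+.2f}")
```

This is the "what" that AI-scale observation supplies; as the surrounding text argues, interpreting why the gaps exist remains a human, philosophical task.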
AI dramatically amplifies your capacity to observe consumer behavior and infer their UPP values—processing purchase histories, social media activity, survey responses, and market patterns at scales humans could never manage alone. But this amplification creates new responsibilities: you must ensure your AI-enabled observation serves genuine understanding rather than manipulation, respects privacy and dignity, and leads toward products that genuinely extend and optimize lives rather than merely extracting value through manufactured desires.
For example, suppose you want to sell product to Google, Inc. as a corporate consumer. Like a hermeneutic interpretation of the Ten Commandments, analyze what Google Inc. intuited, inferred, induced and deduced is good based on its stated business ideology, ontology and corporate philosophy of, "Ten Things We Know to Be True." Consider whether you also intuit, infer and/or induce what Google believes are its Universal, Process and Personal true-north values from these statements. Determine whether Google's corporate behavior deductively reflects these stated true-north value beliefs as its ontology, which you can see Google has written below as its own commandments.
Google Inc.'s 10 truth-values[^196]
You don't need to be at your desk to need an answer.
Democracy on the web works.
The need for information crosses all borders.
Great just isn't good enough.
Focus On the user and all else will follow.
You can make mOney without doing evil.
It's best to do one thinG really, really well.
Fast is better than sLow.
There's always more information out therE.
You can be serious without a suit.
Figure 3.10: Image from Google's NYC Office (© 2015 Photo Credit: BGS)

Google's stated values take on particular significance in the AI age since Google has become one of the primary developers and deployers of AI technology globally. When Google states "Focus on the user and all else will follow," this represents a Lean principle that should guide AI development—but does Google's actual AI deployment reflect genuine focus on user flourishing, or optimization toward engagement metrics that may not serve users' genuine interests? When Google claims "You can make money without doing evil," but deploys AI systems that create filter bubbles, attention capture, and behavioral manipulation at scale, are these stated values genuine ontological commitments or merely public relations?
These questions matter because they illustrate a general principle: in the AI age, the gap between stated corporate ontology and actual ontological commitment can widen dramatically. AI makes it possible to optimize corporate behavior toward metrics that appear to serve stated values while actually undermining them. You must evaluate corporate ontologies not just by stated principles but by observable AI deployment patterns—what do their AI systems actually optimize for? Whose flourishing do they genuinely serve?
Q> Might you rewrite these true-north values to reflect more accurately what you think Google believes are its truth-values and core ontology based on what you see Google actually doing rather than what Google merely says?
Reason, Causation or Nothing
Intuiting, inferring, inducing and deducing human or corporate ontologies to create true-north value within the philosophy of Lean requires that you ground an ideology on a presumption of universal reason and its necessary corollary, causation. Causation underpins the formal Lean practice of "Root Cause Analysis" (RCA),[^197] which originates from an ancient philosophical concept called the "Principle of Sufficient Reason" (PSR). Both RCA and the PSR may also be thought of as an "Axiom of Causation" that assumes every reason or cause must have a prior one, back to the start of existence itself. Together, the PSR and the Axiom of Causation form the basis for the Lean process of RCA and asking five "Whys" through the Lean process of Genchi Genbutsu.
In the AI age, the relationship between reason and causation becomes practically critical because AI systems are fundamentally correlation-detection machines, not causal-understanding machines. AI identifies patterns—that when A occurs, B tends to follow—but AI cannot independently determine whether A causes B, B causes A, both are caused by C, or the correlation is merely coincidental. You must supply causal theories that interpret AI-detected correlations. This is why philosophical grounding in the PSR and RCA matters: it provides the framework for moving from correlation (what AI detects) to causation (what drives genuine understanding and effective action).
The PSR, Axiom of Causation, RCA and the "5 Whys" posit that for every fact, there must be an explanation as to why that fact is.[^198] The PSR most particularly holds that each action resulted from a prior cause down to an ultimate self-causing cause (a causa sui in Latin).[^199] In the PSR, causation is an assumed abstraction of the relations between every series of events. Thus, the PSR underpins most classic explanations for existence, and yet this theory has the earlier stated limitation of not yet being proven as either a processual or axiomatic truth-value itself.[^200] To date, people have found no common agreement as to even a processual self-causing cause, much less an axiomatic truth, explaining the origin of the universe. Thus, consumers, whether scientist, atheist, theologian or organization, can only intuitively believe in the PSR at a universal scale even if they might only employ RCA and the 5 Whys in a far more limited capacity within their business environments.[^201]
When you use AI to support root cause analysis, you're leveraging AI's pattern-detection to suggest potential causal chains—but you must philosophically evaluate whether those chains represent genuine causation or merely plausible-sounding stories that fit the data. AI can generate hundreds of potential "five whys" chains for any business problem, but AI cannot determine which chain traces genuine causation versus which merely traces statistical correlation dressed up as explanation. Your philosophical understanding of the PSR and your empathetic understanding of human motivation must guide selection of which causal chains warrant belief and action.
Like the PSR, formal "Lean Thinking" uses the Axiom of Causation in the form of RCA and the 5 Whys to find the root cause of any given business problem by simply asking why five times, which hopefully is enough. However, the conversations within an HQ will benefit from moving beyond a mere five whys toward analyzing who consumers are through an infinite number of whys until a House of Quality is ultimately bounded by infinities, paradoxes and tautologies.[^201-1] A Lean business ideology ought to lead you to the edge of axiomatic and systemic explanations of the universe, to the bare existence of an empty, infinite set at the conceptual inception of something rather than nothing at all.
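The 5 Whys chain described above can be modeled as a simple ordered structure. The problem and answers below are entirely hypothetical; in real Lean practice each "why" is answered at the Gemba, not generated mechanically, and, as the text argues, the chain could always be extended past five:

```python
# Minimal sketch of a "5 Whys" root cause chain as a data structure.
# Every question and answer here is a hypothetical example.
why_chain = [
    ("Why did the shipment arrive late?", "The truck left the dock late."),
    ("Why did the truck leave late?", "Loading finished behind schedule."),
    ("Why did loading finish late?", "The pick list was wrong."),
    ("Why was the pick list wrong?", "Inventory counts were stale."),
    ("Why were the counts stale?", "Cycle counting was skipped last week."),
]

def root_cause(chain):
    """Treat the final answer as the root cause -- though, as the text
    argues, one could always ask 'why' again past five."""
    return chain[-1][1]

for i, (question, answer) in enumerate(why_chain, start=1):
    print(f"Why #{i}: {question} -> {answer}")
print("Root cause:", root_cause(why_chain))
```

The structure makes the chapter's caution concrete: nothing in the data structure itself distinguishes a genuinely causal chain from a plausible-sounding one, which is exactly the evaluation the human analyst must supply.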
AI can help you ask "why" repeatedly at scale—generating hypothesis trees, mapping causal chains, identifying conceptual dependencies—but this amplification comes with risk. AI might generate plausible-sounding chains of "whys" that never touch genuine causal reality, creating illusion of understanding without substance. You must use AI to amplify your philosophical questioning while maintaining critical evaluation of whether each "why" brings you closer to genuine understanding or merely to sophisticated pattern-matching.
Finally, an empty, infinite set is the last thing consumers ought to consider before moving past reason, beyond spacetime itself, to something other-than-reason.[^202] Since a business ideology cannot now axiomatically or systemically describe whatever lies beyond reason as its inverse, neither consumers nor organizations can axiomatically or systemically know how many "whys" will reach what is certainly not here without intuitively speculating.[^203] This represents the boundary where human philosophical speculation must take over from both reason and AI—the domain where meaning emerges not from processing patterns but from existential commitment to purposes that transcend calculation.
Reason as Causation from Aristotle's Perspective, with Modification
I believe one can divide [People] into two principal categories: those who suffer the tormenting desire for unity, and those who do not. - George Sarton, aet. 20[^204]
In the Western/Occidental tradition, you can trace one of the first definitions of pure reason at the boundary of what may be considered rational to Aristotle's "Four Causes": the "Formal," "Material," "Efficient" and "Final" ones. Like Blank's "Four Steps to the Epiphany," Aristotle's "Four Causes" may be roughly conceptualized and related to Lean philosophical thinking as follows:
Formal ontological causes explain the shape of how consumers and product came to be. Since formal causes generally don't make sense in the scientific age, in the philosophy of Lean, I somewhat modify the formal cause to be what I consider a self-defining ontological one encapsulating all that consumers, organizations and product are, including all consumers' personal speculation, emotions, and dreams arising as a course of their personal perspectives. The formal cause simply is as it is because it is in circular fashion;[^205]
In the AI age, understanding formal causes becomes relevant to how we conceptualize AI systems themselves. AI's "formal cause" is its architecture—transformer models, neural networks, algorithmic structures. But confusing AI's formal structure with genuine ontology—treating AI "as if" it has being in the way consumers have being—represents a category error that leads to poor AI leadership. AI has formal structure but not formal ontology in the existential sense. It processes information according to its architecture but does not exist as a being with purposes emerging from its own existence.
Material physical causes define what physical processes led to consumers' and products' existence as a subset of the formal, ontological causes. The material cause roughly aligns with modern scientific explanations of how natural law emerged through axiomatic and systemic truth-values. Thus, the material cause relates to how organizations actually produce product for customers;
AI operates entirely at the level of material causes—physical processes (electron flows, computations, data transformations) producing outputs according to natural laws. This is AI's domain of competence: processing material causation at scales and speeds humans cannot match. When you direct AI toward material analysis—supply chain optimization, production scheduling, resource allocation—you're leveraging AI's strength with material causes. But recognizing that AI operates only at this level helps you see what AI cannot do: determine which material processes serve purposes worth pursuing, evaluate whether efficient material causation creates genuine value, or understand how material processes feel to conscious beings who experience them.
Efficient first causes generally equate with the very first, initial cause, whether material or not, that initiated existence through the Axiom of Causation and eventually led to who consumers are, what they experience, and how organizations produce product for them; and
AI cannot address efficient first causes philosophically—the question of why anything exists at all lies beyond AI's pattern-matching capabilities. But AI can help you trace causal chains backwards empirically, identifying earlier and earlier causes in sequences of events. When you use AI for historical analysis or root cause investigation, you're using AI to map apparent causal chains—but you must supply the philosophical framework that determines when you've reached a genuinely explanatory cause versus merely an earlier link in a chain that requires further explanation.
Final teleological causes explain the end-goal/factor/motives of the universe and why the efficient cause created consumers and product at all. The final cause is also synonymous with the teleological cause, the end purpose of all learning, which is a combination of the Greek τέλος, telos (root: τελε-, end, purpose) and -λογία, logia (a branch of learning).[^206]
Final causes represent the domain where human philosophical leadership becomes absolutely irreplaceable. AI has no final causes—no purposes, no goals, no ends toward which it genuinely strives. AI optimizes objectives you specify, but it cannot determine which objectives are worth optimizing for. When you lead with AI, you supply the final causes that give direction to AI's material processing power. This is why Leanism insists on philosophical clarity about true-north value, the Ontological Teleology, and what genuinely extends and optimizes human lives and existences—these final causes must guide your AI deployment, or your AI will optimize efficiently toward ends that destroy value even as they appear to create it.
In this post-post-modern world, these formally ontological, materially physical, efficiently first and finally teleological causes may seem logically circular or tautological in that they lack unification or an axiomatic origin without another extended self-causing cause standing outside of known axiomatic and systemic truths. Nonetheless, consumers cannot help but experience their generally consistent personal perspectives as the synthesis of all these causes combined into their present state of who they identify themselves as being. Meaning emerges for them through this constant, simultaneous tension between the apparent tautological causation of the universe and consumers' assumed teleology based on their intuitive beliefs.
In the AI age, this tension between tautology and teleology becomes a practical design principle for AI leadership. AI systems operate tautologically—processing inputs to produce outputs according to programmed algorithms, optimizing objectives you specify without genuine purposefulness. But you must direct this tautological processing toward genuine teleological ends—toward purposes that emerge from human existential needs and serve true-north value. The art of leading AI is precisely this: directing mechanical, circular processing toward meaningful, purposeful outcomes that genuinely extend and optimize human lives and existences.
Not coincidentally, the religious philosophy of Buddhism widely adopted in Japan where Lean thinking developed into a holistic business philosophy, describes an apparent causal circularity for the universe through a concept called "pratītyasamutpāda," which Buddhist monk Thich Nhat Hanh explains:[^207]
Pratitya samutpada is sometimes called the teaching of cause and effect, but that can be misleading, because we usually think of cause and effect as separate entities, with cause always preceding effect, and one cause leading to one effect. According to the teaching of Interdependent Co-Arising, cause and effect co-arise (samutpada) and everything is a result of multiple causes and conditions... A cause must, at the same time, be an effect, and every effect must also be the cause of something else. Cause and effect inter-are. The idea of first and only cause, something that does not itself need a cause, cannot be applied.
This Buddhist concept of interdependent co-arising becomes remarkably relevant to understanding AI systems and their relationship to human purposes. In modern AI, particularly in large language models, you see a form of technological pratītyasamutpāda—AI systems co-arise with human purposes, training data reflects human values which shape AI outputs which influence human thinking which feeds back into training data. Cause and effect inter-are in feedback loops that make it difficult to identify "first causes." This makes philosophical clarity about teleology even more essential: in systems of interdependent co-arising, you need clear final causes (purposes worth pursuing) to guide the circular processes toward value rather than merely efficient circular motion.
Relating Aristotle's Four Causes to Lean Levels of True-North Value
By seeing ultimate causation of consumers' lives and existences as a possibly circular co-arising through pratītyasamutpāda for business purposes, you can effectively map all four of the Aristotelian causal explanations to consumers' and organizations' universally axiomatic, processually systemic and personally intuitive perspectives and truth-values as explained here:
(1) Universally Axiomatic: Aristotle's four causes may be seen as universally, axiomatically explaining the origin of consumers' existences in a logically self-defining sense not related to a cause outside of existence itself. Examples include axioms such as the Western Ontological Argument trying to prove God by the very definition of perfection, Eastern philosophical traditions like pratītyasamutpāda, and modern scientific notions of the universe spontaneously co-arising[^208] under the laws of quantum physics[^209] through such things as quantum fluctuations.
A universally axiomatic cause is generally based on the logic that nothingness cannot exist in the truest sense, since "nothing" cannot be by its own definition.[^210] Even for efficient first and final teleological causes residing at the existential extremes of all spacetime, the basis of such causes may be seen to be axiomatically self-regenerating in this way. But I would like to re-emphasize that no philosophical or scientific explanation for consumers' ultimate existences today may be deemed axiomatic with predictive, universal certainty of infinite confidence (∞σ). As stated, all people, whether theistic or not, must rely on systemically hypothetical and personally speculative explanations for their own existences and essences.
When you direct AI toward universal analysis, you're asking AI to process information according to axioms—mathematical principles, logical rules, physical laws. AI excels at this formal manipulation. But recognize that axioms themselves rest on philosophical foundations that AI cannot evaluate. Why accept the axioms of mathematics as valid? Why trust that logical principles map to reality? These meta-axiomatic questions require human philosophical judgment. When you use AI for axiomatic processing, you're presupposing philosophical commitments about what axioms to trust—commitments that you make but AI cannot.
(2) Processually Systemic: Aristotle's formal and material causes may be seen as providing processually systemic explanations for consumers' and organizations' existence, such that consumers' and organizations' existences arise due to natural processes. Consumers emerged from efficient or final causes arising systemically at the extremes of existence cohering with the overall system of the universe in which consumers exist. The Lean value stream of physical or logical processes extends to the boundaries of known causation, creating consumers from an intelligible, systemic reason that may or may not be tautologically self-defining.[^211] To be clear, no ultimate philosophical or scientific explanation for consumers' existences today may be deemed a systemic truth such that it coheres sufficiently with people's commonly shared, predictable experiences to lean toward at least two sigmas (≥2σ) of confidence.
AI specializes in processual systemic analysis—identifying patterns in how systems evolve over time, predicting future states based on historical processes, optimizing system performance. When you use AI for supply chain management, market forecasting, or organizational optimization, you're leveraging AI's strength with processual systemic thinking. But AI cannot determine which processual systems are worth optimizing. A system that efficiently converts natural resources into pollutants is processually optimal from an engineering perspective but teleologically catastrophic from a human flourishing perspective. You must supply the philosophical framework that evaluates systemic processes according to whether they serve genuine value.
(3) Personally Intuitive: Aristotle's efficient first and final causes may be seen as personally intuitive explanations for consumers' existences when they lead to a spiritualism or theology standing outside consumers' universally systemic and common experience. For example, consumers seeking Aristotle's materially physical cause may be led to a scientismic belief that science will ultimately determine the origin of existence. Like other personally intuitive truths, scientismic true-north values ultimately revert to self-defining speculation because they lack further support in universally axiomatic or processually systemic true-north values of their own, even if they seem intuitively true based on some limited evidence or belief in the consistent explanatory power of science.[^212-1] Or as Emerson Spartz, otherwise known as the "Internet Meme Meister," said:[^212]
You can have whatever personal values you want,
but businesses that don't provide
what customers want
don't remain businesses.
Literally, never.
Personal intuitive truth-values represent the domain where AI is completely blind—and where AI's blindness creates both danger and opportunity. The danger: AI might optimize business processes in ways that efficiently violate customers' deeply held personal values, creating short-term gains while destroying long-term relationships and brand value. The opportunity: because AI cannot access personal truth-values directly, it forces you to exercise philosophical judgment about what customers value personally, leading to deeper empathetic understanding than you might develop if you could delegate this understanding to AI.
When you lead with AI in the age of diverse personal values and beliefs, you must recognize that behind statistical patterns in consumer behavior lie personal truth-values that matter intensely to the individuals who hold them, even if those values lack universal or systemic validation. Your AI might identify that only 5% of customers hold a particular religious belief—but that 5% might constitute your most loyal customers, or might represent values that deserve respect regardless of their statistical frequency. AI gives you the "what" (patterns in belief); you must supply the "why it matters" (philosophical understanding of belief's role in human flourishing).
Rational Agnosticism—Existential Causation in the Eastern Traditions
However, both Western and Eastern perspectives are represented within the philosophy of Lean, since Lean originated from a synthesis of occidental and oriental cultures and concepts.[^213] In contrast to occidental philosophies' explanations for existence, and with the limited exception of the Buddhist principle of co-arising, oriental philosophies have generally considered questions about the cause of the universe's creation to be without purpose, choosing instead to be rationally agnostic. When they have attempted to discern the ultimate "why," Eastern philosophies try to prove existence from the very fact that consumers perhaps falsely presume the universe could not exist. Coming from Western culture myself, I like to think about Henri Bergson's quote below from 1911, when he was considering this conundrum:[^416-2]
...If I ask myself why bodies or minds exist rather than nothing, I find no answer, but that a logical principle, such as A=A, should have the power of creating itself, triumphing over the nought throughout eternity, seems to me natural.... Suppose, then, that the principle on which all things rest, and which all things manifest, possesses an existence of the same nature as that of the definition of the circle, or as that of the axiom A=A: the mystery of existence vanishes.
Eastern philosophy thus is almost an inverse of the occidental concept of "From nothing, nothing comes," being closer to "There is because there must be."[^215] According to the 14 unanswered questions attributed to Buddha, much of the logical, Axiom of Causation and Lean "5 Whys" reasoning is ultimately pointless, because consumers' very existence means that consumers or something else must have always existed, which is quite smart. These 14 historically unanswerable questions define what Buddha apparently believed we cannot know and need not ask any further.[^216] You can see them organized below into four lean, philosophical categories according to their subject matter:
Questions concerning the existence of the world in time:
1. Is the world eternal?
2. ...or not?
3. ...or both?
4. ...or neither? (Pali texts omit "both" and "neither")

Questions concerning the existence of the world in space:
5. Is the world finite?
6. ...or not?
7. ...or both?
8. ...or neither? (Pali texts omit "both" and "neither")

Questions referring to personal experience:
9. Is the self identical with the body?
10. ...or is it different from the body?

Questions referring to life after death:
11. Does the Tathagata (Buddha) exist after death?
12. ...or not?
13. ...or both?
14. ...or neither?
By leaving these questions unanswered, Buddhists take a logically agnostic position with regard to the mind/body duality and the universe's origin. Buddhists instead address pratītyasamutpāda/co-arising by deeply pursuing questions of who consumers are today rather than focusing on why they came to be.[^217]
This Buddhist rational agnosticism provides an extraordinarily useful framework for AI leadership. There are questions AI cannot answer—not because of technical limitations but because they are genuinely unanswerable or outside the domain where AI's capabilities apply. Rather than forcing AI to generate speculative answers to unanswerable questions, you should recognize the boundaries where AI should remain silent and human philosophical judgment takes over. Buddha's list of unanswerable questions maps loosely onto domains where AI cannot help: questions about ultimate causation, consciousness, personal meaning, and existence after death. When AI generates confident-sounding outputs about these domains, you should treat those outputs skeptically as sophisticated pattern-matching, not genuine knowledge.
Moreover, the Buddhist focus on "who consumers are today" rather than "why they came to be" provides practical guidance for AI-augmented customer understanding. AI can help you understand who customers are now—their current preferences, behaviors, needs—by processing vast amounts of current data. But tracing back through infinite "whys" to ultimate explanations for their existence represents diminishing returns philosophically and practically. Focus your AI analysis on actionable understanding of current customer state rather than infinite regression into causal origins that may be unknowable or irrelevant to value creation.
Philosophers, physicists and mathematicians all have something to say about this. The ancient Greek philosopher Parmenides, who proposed the "From nothing, nothing comes" concept, also stated that the last conceivable thing that could exist before true nothingness would be an empty set, or the knowledge that nothing existed. Scientists often step further into this discussion by saying that the very structure of information itself comes from the mere possibility of true nothingness.[^220] Mathematicians add that an empty set still carries enough information to be considered more than completely empty.
These philosophical questions about nothing, emptiness, and information structure directly inform how we should understand AI. AI processes information—patterns, structures, relationships in data. But information processing is not the same as understanding or consciousness. An empty set contains structural information (the fact of its emptiness) without containing any actual elements. Similarly, AI contains structural patterns from training data without containing genuine understanding or consciousness. When you lead with AI, you're directing information processing—but you must never confuse information structure with the lived meaning that emerges from conscious beings who experience existence personally.
You likewise may choose to view causation within a Lean business ideology in a modified form: the regression of causes becomes infinite and self-defining, since "nothingness" by its very definition could never be, thereby leading to circular reasoning.[^218] This leads to the startling conclusion that you may be making a false presumption in business that no profit could ever be, when in fact businesses can always learn something very valuable from their mistakes.[^219] Of course, a business cannot ever fully test that theory, since it would never be around to experience the result should it go defunct. According to Buddha, this form of reasoning may be the very reason that you and all businesses exist!
This insight about learning from failure becomes especially relevant to AI deployment. AI systems can be trained on failures—learning from errors, adjusting based on negative feedback—but AI cannot understand why failure matters existentially or what makes some failures worthwhile learning experiences versus pointless destruction. You must supply the philosophical framework that determines which failures represent valuable learning opportunities (worth intentional experimentation) versus which represent unacceptable risks to human welfare (requiring prevention at all costs). AI optimizes error rates; you determine which errors are acceptable and which are existentially catastrophic.
Boundaries of Reason—Self-Causing Causes, Gödel's Incompleteness Theorems and Simon's Bounded Rationality
Unfortunately, as you now see, no one has demonstrated why information itself exists to at least two sigmas (≥2σ) of intersubjective, scientific validity, thereby making any ultimate explanation for the origin of business logic a merely speculative story.[^221] Going even further, the greatest problem for science in proving an intelligible reason for the universe's existence, and ultimately who and why consumers are, is that while some scientific evidence exists that the universe originated from a singular event like the Big Bang, science has not been able to describe that origin axiomatically or systemically. Scientists themselves thus still engage in theoretically intuitive speculation as a form of scientismic belief.[^222]
Beyond our own scientific ignorance, many famous philosophers and mathematicians, such as David Hume, Bertrand Russell and Kurt Gödel, provided significant reasons why reason cannot explain itself.[^223] Even Immanuel Kant, though he intuitively believed that human experience requires reason, famously limited the application of reason to human experience, which forms the basis for the scientific empiricism that allows you to test what a product is worth in a coherent way.[^224] Below, I provide a brief synopsis of these limits to reason within a Lean business ideology from a more logically systemic perspective, so you may better know where the rational foundation of the UPP true-north value you seek to produce and provide to consumers begins and ends.[^225] Or as T.S. Eliot better said in 1943 in his "Four Quartets":
We shall not cease from exploration / And the end of all our exploring / Will be to arrive where we started / And know the place for the first time.
One well-known circularity to true-north value that is almost always described by authors writing on this subject is Gödel's First Incompleteness Theorem (1931).[^226] Alfred North Whitehead and Bertrand Russell, in their book "Principia Mathematica" written at the turn of the 20th century, attempted to construct a purely logical system starting from universally axiomatic truths, in order to prove all further truths from these initial postulates.[^227] While Whitehead and Russell thought they had constructed a system for universally deducing all reason from these axioms, Kurt Gödel proved otherwise by demonstrating that some statements within Whitehead and Russell's system, though true, could not be proven within the system itself.
A common, one-phrase synopsis of Gödel's proof is the construction, within Whitehead and Russell's logic, of the mathematical equivalent of the sentence "I cannot be proven." This sentence creates an immediate, obvious and obnoxious paradox: if the sentence could be proven, the system would have proven a falsehood, contradicting itself. If it cannot be proven, then it is true, yet it stands as a truth the system can never demonstrate, even though the system is supposed to deductively prove everything non-tautologically. Thus, while true within the system, this paradox caused a big problem for people who wanted to understand and apply true-north value in a singularly consistent way!
Gödel's Incompleteness Theorem has profound implications for AI systems and their limitations. AI systems operate according to formal rules and algorithms—they are, in essence, formal systems of the type Gödel analyzed. Gödel's theorem suggests that any sufficiently complex formal system (like a sophisticated AI) will contain true statements that the system itself cannot prove. This means that AI systems, no matter how advanced, will always have blind spots—truths they cannot access through their own formal processing, requiring external input (human philosophical judgment) to recognize and address.
Moreover, Gödel's insight illuminates why AI cannot bootstrap itself to genuine general intelligence through pure self-improvement. Any AI attempting to prove its own consistency or validate its own axioms runs into Gödelian limitations. This is why AI always requires human philosophical direction—humans must supply the axioms, validate the frameworks, and provide the external perspective that AI cannot generate from within its own formal system.
Many mathematicians have validated what Gödel showed: no logical or mathematical system rich enough to express basic arithmetic can exclude paradoxes and self-reference. The conundrum stated by Gödel's incompleteness theorems can be easily seen in Bertrand Russell's "Reference Paradox," which I refer to often in personal conversations. The Reference Paradox states that the "list of all lists cannot contain a listing of itself" by the very definition of a list, since the nth item in the list would always need a further list to capture the list's total meaning.[^227-1] This infinite logic creates a paradox in the definition of a list or set, rather like an index to a library that would have to include itself yet could never stand outside the library's own reference collection.[^228]
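Russell's list-of-all-lists paradox can be made concrete in a few lines of code. The sketch below is purely illustrative, and the `is_normal` predicate and tiny universe of sets are my own hypothetical constructions: a set is "normal" if it does not contain itself, and the paradox arises for the would-be set of all normal sets, which Python, like axiomatic set theory, cannot even build.

```python
# A set is "normal" if it does not list itself as a member.
# Frozensets are hashable, so they can nest inside one another.

def is_normal(s):
    """Return True if the set s does not contain itself as a member."""
    return s not in s

# Any finite universe of sets we can actually construct is well behaved:
empty = frozenset()
nested = frozenset({empty})
assert is_normal(empty) and is_normal(nested)

# But the "set R of all normal sets" is contradictory either way:
#   if R is normal, R belongs in R, so R contains itself (not normal);
#   if R is not normal, R contains itself, yet R holds only normal sets.
# Python mirrors axiomatic set theory here: a frozenset can never be
# built to contain itself, so R simply cannot exist inside the system.
```

The point, as with Gödel's result, is that the contradiction is visible only from outside the system: the code can state the predicate, but no object within the language can satisfy the paradoxical definition.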
This Reference Paradox is especially relevant to understanding modern AI systems, particularly large language models. These systems are trained on vast corpuses of text—essentially "lists of all human knowledge" represented in language. But these systems cannot fully represent themselves within their own processing—they cannot generate perfect models of their own capabilities and limitations from within. This is why AI systems regularly exhibit unexpected behaviors, hallucinations, and failures that surprise even their creators. The systems contain paradoxes and blind spots that emerge from their attempting to represent everything including (implicitly) themselves.
When you lead with AI, you must recognize that you occupy a position outside the AI's formal system—you can observe the AI's behavior, identify its blind spots, recognize its paradoxes, and provide corrections that the AI cannot generate internally. This external perspective is not a temporary limitation of current AI that will be solved with more computing power; it's a fundamental feature of formal systems that Gödel identified. You will always need to supply philosophical judgment from outside the AI's system to direct it effectively.
The full details of Gödel's proof likewise fall outside the scope of this book, but I encourage you to read further through the footnoted references and lean philosophically, because this is so important to understanding the universe in which consumers and organizations operate.[^229] You are left with the fact that consumers' existences cannot be explained entirely through universal and process true-north values, but rather only through personally intuitive speculation at this point in time. You cannot exclude all forms of tautological self-reference about who consumers are or why they buy anything at all at the furthest edges of what life has in store for them.
So why does this matter to the metaphysics of Lean and counting the money that you make? Because one would think that mathematics could be self-contained, since it is so widely heralded as the big-data elixir for understanding all that consumers truly value and will buy. However, Gödel showed that Whitehead and Russell's Principia Mathematica failed to create a mathematical system without self-reference, and that all mathematical systems rich enough to express arithmetic invariably break down and fall into strange, logically tautological loops at some point.[^230] Mathematicians seem to have already settled the question for their discipline, accepting as an axiomatic truth that they cannot find a single axiom to explain all mathematical theories in light of Gödel's incompleteness theorems, among many other mathematical paradoxes in existence.
This matters deeply to AI and business analytics because the current zeitgeist treats "big data" and AI as if they could provide complete, self-contained knowledge about consumer behavior and value creation. But just as Gödel showed that mathematical systems cannot be complete and consistent simultaneously, AI systems analyzing consumer data cannot provide complete, self-consistent models of human value without external philosophical input. Your data analytics will always contain blind spots, paradoxes, and unprovable-but-true insights that require human judgment to identify and address. Recognizing this limitation prevents over-reliance on AI-generated insights and maintains appropriate humility about what data can reveal.
You can find many paradoxes beyond Gödel's own, inside and outside of mathematics.[^231] Modern developments beyond Gödel's incompleteness theorems, such as quantum physics and the theories of relativity, appear to show that reason has its limits in a general, universal sense: some truths cannot be logically deduced, some are relative, and some arise from matters of pure chance.
A common example provided by physicists and non-physicists alike is the scientismic debate around Heisenberg's Uncertainty Principle.[^232] Under the Copenhagen interpretation, observing matter and energy at a quantum level in some ways determines its existential state and Ontological Realization. The competing Everett/Schrödinger interpretation says that these particles cohere/decohere with many other worlds, like homonyms and phrases with parallel meaning.[^232-1] The interpretation of quantum theory is the next great debate, equivalent to those over a flat world or heliocentrism. Whether we will sail off the edge of reason we do not yet know, but we nonetheless have a duty to be optimistic.[^232-2]
These quantum physics debates become relevant to AI leadership in a practical sense: quantum uncertainty suggests that perfect prediction is impossible even in principle for some systems. AI systems that optimize for prediction accuracy might represent false confidence—claiming certainty where the universe itself is probabilistic. When you lead with AI in domains affected by genuine quantum uncertainty (which may include consumer behavior influenced by fundamentally unpredictable neurological processes), you must recognize that AI's predictive models have inherent limitations beyond mere data scarcity. Some unpredictability represents genuine ontological uncertainty rather than epistemic ignorance that more data could resolve.
Moreover, the observer effect in quantum physics—where observation affects what's observed—finds parallels in AI-augmented business. When you deploy AI to observe and predict consumer behavior, that AI deployment changes consumer behavior (consumers know they're being analyzed, adapt their behaviors, respond to AI-generated recommendations). This creates feedback loops where AI's predictions become self-fulfilling or self-defeating in ways that undermine the prediction's original validity. You must account for these observer effects when evaluating AI-generated insights, recognizing that deploying those insights changes the reality they describe.
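The observer-effect feedback loop described above can be sketched as a toy simulation. Everything here is a hypothetical illustration: the `realized_demand` function, the base demand of 100, and the 0.3 reaction coefficient are invented for the sketch, not drawn from any real market model.

```python
# Toy feedback loop: an AI forecast is published, consumers partly
# react to the forecast itself, and realized demand shifts away from
# what the model originally predicted.

def realized_demand(forecast, base=100.0, reaction=0.3):
    """Consumers drift from their base behavior toward the published forecast."""
    return base + reaction * (forecast - base)

forecast = 130.0                      # the model predicts a surge
actual = realized_demand(forecast)    # roughly 109: partly self-fulfilling
error = forecast - actual             # publishing the forecast created error

# If the model naively retrains on each period's realized demand,
# its forecasts spiral toward the fixed point of the feedback loop:
f = forecast
for _ in range(20):
    f = realized_demand(f)            # converges toward base = 100

print(round(actual, 1), round(f, 1))
```

The design point is that the prediction error is not a data problem: it is created by the act of deploying the prediction, which is exactly the observer effect the text describes.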
Given academic uncertainty even about the Heisenberg Uncertainty Principle, the theories of relativity and other physically scientific true-north values appear to become subjective, personal and dependent on scientismic belief at sufficiently quantum or intergalactic scales: we do not know for sure the source of knowledge at this time on an axiomatic basis. Even if the laws of physics are deterministic, consumers as self-conscious, self-interested, and self-centered agents freely and willingly optimize toward an infinite, indeterminable, and possibly tautological universe. While this does not mean that knowledge cannot ultimately be explained, it evidences the tension between reason and apparent paradox in matters beyond logic, just as physicists recently experienced when attempting to demonstrate supersymmetry by discovering the Higgs boson "God Particle" in the Large Hadron Collider, which pushed scientists up against the limits of what makes physical sense.[^233]
You don't want to misapply these limits to matters of reasonable certainty.[^233-1] In fact, philosophers make fun of other philosophers who do. However, you ought to understand these limits in a general way, to become aware of the boundaries of what consumers can truly value at this point in time. Bringing this discussion back to Lean organizations, Herbert Simon's Bounded Rationality demonstrated in any event that customers', employees', and organizations' irrationality stays well within the narrower boundaries of the walls and U-shaped cubicles in which Lean organizations operate.[^234] Nonetheless, the very act of expanding the boundaries of knowledge is the same as creating wealth, which is what you ought to do regardless of where it might lead.[^234-1]
Herbert Simon's concept of Bounded Rationality becomes even more relevant in the AI age. Simon showed that humans don't optimize perfectly according to rational choice theory; instead, we "satisfice"—seeking solutions that are good enough given our cognitive limitations, time constraints, and information costs. AI systems, by contrast, can process vastly more information and explore many more alternatives than humans can. This creates both opportunity and danger.
The opportunity: AI can help overcome some bounds on human rationality by processing information at scales we cannot match, identifying patterns we would miss, and evaluating alternatives we couldn't consider. The danger: AI might optimize toward "perfect" solutions that violate human bounded rationality, creating recommendations that are theoretically optimal but practically unusable because they exceed human cognitive capacity to implement, require information humans cannot gather cost-effectively, or demand decision-making speeds humans cannot sustain.
When you lead with AI, you must recognize that even if AI can process unboundedly, humans remain boundedly rational. Your AI recommendations must account for human cognitive limits, must satisfice toward good-enough solutions humans can actually implement, and must respect that perfect optimization might be worse than good-enough optimization if perfection comes at costs in human stress, organizational complexity, or decision paralysis. Lean philosophy's focus on simplicity, waste elimination, and human-centered design helps prevent this failure mode—it keeps AI grounded in what serves actual human flourishing rather than abstract optimization.
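Simon's distinction between satisficing and optimizing can be made concrete in code. The sketch below is a hypothetical illustration, with invented supplier names, scores, and aspiration threshold: a bounded-rational search stops at the first good-enough option, while exhaustive optimization examines every alternative.

```python
# Simon's "satisficing": accept the first option that meets an
# aspiration level, instead of searching for the global optimum.

def satisfice(options, aspiration):
    """Return the first option whose score meets the aspiration level."""
    for name, score in options:
        if score >= aspiration:
            return name          # good enough: stop searching here
    return None                  # nothing cleared the bar

def optimize(options):
    """Exhaustive search: examine every option to find the maximum."""
    return max(options, key=lambda o: o[1])[0]

suppliers = [("A", 0.72), ("B", 0.91), ("C", 0.88), ("D", 0.95)]

print(satisfice(suppliers, aspiration=0.90))  # stops at the first hit
print(optimize(suppliers))                    # inspects all four options
```

Note the asymmetry: the satisficer returns "B" after two comparisons, while the optimizer must touch every option to find "D". Whether the extra search cost is worth the marginal gain is exactly the bounded-rationality judgment the text says humans must supply.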
20th Century Fragmentation of Unification
Given these limits that 20th century mathematics ran into, science and even philosophy turned away from universal, systemic analysis that tries to induce a single explanation for everything from all that people think they know. Thought leaders stopped trying to connect all details within a universal theory and instead segregated their analysis into discrete, disconnected fields of knowledge for the sake of advancing each domain independently. These more specific insights became much more effective at describing and predicting true-north value than any of the proposed unifying theorems, even if the more specific theories could not be used to explain other true-north values.
Contemporary philosophers went so far as to concentrate only on problems they felt stood safely outside of science's reach.[^235] While professional philosophers maintain this intellectual posture in our post-post-modern era, rather ironically, leading physicists like Stephen Hawking and David Deutsch have recently noodled on overarching physical theories of all true-north value in books like "The Grand Design" and through physical concepts like "String Theory," while philosophers largely abandoned that explanatory goal.[^236]
This 20th century fragmentation pattern now repeats with AI development. Early AI researchers pursued "artificial general intelligence"—unified systems that could handle any cognitive task. When that proved intractable, AI development fragmented into specialized domains: computer vision, natural language processing, game playing, recommendation systems, each advancing independently with domain-specific architectures and techniques. This specialization generated impressive progress—just as scientific specialization did—but at cost of losing sight of unified purposes.
When you lead with AI, you must resist this fragmentation impulse. Yes, you'll deploy specialized AI systems for specific tasks. But you need unifying philosophical framework—Leanism provides this—that ensures all your specialized AI deployments serve coherent purposes aligned with true-north value. Without this philosophical unity, you risk having AI systems that individually optimize their narrow domains while collectively undermining genuine value creation. The marketing AI optimizes engagement, the operations AI optimizes efficiency, the finance AI optimizes short-term profit—and together they destroy customer relationships, employee dignity, and long-term viability because no unifying philosophy directs them toward human flourishing.
Even if both contemporary philosophy and science dislike over-arching, unifying theories, they cannot avoid the fact that all underlying axioms and systems resulted in consumers' personal presences and consciousnesses, which are unified for most intents and purposes. On balance, consumers' unified consciousnesses cause them to buy product, which makes the money organizations earn truly meaningful. Consumers bring together all of the natural laws and their biological processes into their personal presences, into who they are as consumers. So to understand what people will buy, you must look at customers by cohering the three perspectives and truth types within who they are as lean people. To conduct effective business analysis, you must apply all discrete axiomatic and systemic evidence, and all speculatively intuitive notions of true-north value, to who you believe consumers are and why you believe they will buy product in meaningful quantities, which Leanism helps you do.
This unity of consumer consciousness provides your philosophical North Star when leading AI. Even though your AI systems are fragmented into specialized tools, the consumers they ultimately serve are unified beings with coherent (if complex) purposes, values, and existential needs. Your AI deployment strategy must mirror this unity—all specialized AI systems must integrate into coherent service of unified human flourishing. Leanism's UPeople framework provides exactly this integration, ensuring that your Universal analysis, Process optimization, and Personal understanding cohere into unified value creation rather than fragmenting into contradictory optimization across disjointed metrics.
Money as Unified Lean Metaphysics
However, a tension arises between the unification of the universe, the different perspectives consumers bring to how they perceive the universe, and the true-north value of the product within it. You must recognize how consumers' mutations and adaptations in their underlying physical processes created divergent perceptual and cognitive biases within them, which behavioral economists and marketing neuroscientists increasingly explain. Marketing departments in all businesses analyze consumers' different personal perspectives on various products on a daily basis in order to sell them more.
And yet, while consumers may have been created by universal axioms and processual systems, they nonetheless stand in a singular, intersubjective universe that yields different personal perspectives on it. Common sense indicates that you ought to be able to discuss the full meaning of market research in a largely coherent fashion, even in this post-post-modern, deconstructed world. Since deconstructionist scientific and literary theories have these days largely accomplished their end-goals,[^237] organizations now operate in the deconstructed aftermath of a post-post-modern world, striving (perhaps pointlessly) toward some common-sense reunification in order to make an effective difference in what consumers commonly experience from the product they buy. This unified experience ultimately informs what gets bought in the singular exchange of product for money that the philosophy of Lean represents.
With this intellectual history in mind, I propose a unifying, coherent, Lean business ideology that leans an organization philosophically back into consumers, while simultaneously helping you become completely aware of the intellectual difficulties of creating over-arching, and over-sold, business schemes.
In the AI age, this unifying Lean ideology becomes even more essential. AI naturally fragments knowledge—each model trained on different data, each algorithm optimizing different objectives, each system operating in different domains. Without unifying philosophy, your AI deployment becomes a tower of Babel where different systems speak different languages, optimize toward conflicting goals, and collectively undermine coherent value creation. Leanism provides the philosophical unity that coordinates fragmented AI capabilities toward unified purposes.
Money serves as the ultimate unifying measurement in this Lean metaphysics—not because money is the ultimate value, but because money represents the point where all perspectives converge in an observable transaction. When consumers pay money for product, that transaction synthesizes their Universal, Process, and Personal truth-values into a single observable act. AI can help you analyze these monetary transactions at scale, identifying patterns in how different consumers value different products. But AI cannot determine what these transactions mean philosophically—whether they represent genuine value creation or value extraction, whether they serve human flourishing or undermine it, whether they're sustainable or exploitative.
You must keep the scientific, literary and philosophical sophistication of consumers' underlying, divergent, fundamental processes in mind while recognizing that customers identify themselves as buying product from a singularly unified, lean, personal perspective. Regardless, consumers inevitably look to explain the coherence of their lean personal identities from that perspective and to uplift themselves by buying product. Thus, Leanism is a "meta-modernist" or "pseudo-modern" business philosophy optimistically attempting to synthesize this reality while keeping all this post-modern skepticism in mind.[^236-1]
Meta-modernism as applied to AI leadership means holding two truths simultaneously: (1) we're skeptical that any single framework can fully explain human value creation, aware of AI's limitations, conscious of philosophical uncertainties; and (2) we're optimistic that Lean philosophy provides a good-enough framework for directing AI toward genuine human flourishing, committed to continuous improvement, hopeful that AI can amplify rather than replace human judgment. This is not naive techno-optimism that believes AI will solve everything, nor is it cynical techno-pessimism that rejects AI as inherently dehumanizing. It's philosophical pragmatism that uses AI within clear ethical boundaries toward purposes grounded in respect for people and true-north value.
Beautiful Question Marks??
Keeping this post-modern intellectual legacy in mind along your journey up the true-north value stream, in order to motivate consumers to purchase something you ought to find some unifying reason for the origin of who they are, drawn from universally axiomatic or processually systemic truth-values that lean with at least two sigmas (≥2σ) of confidence. Otherwise, you will be selling into a speculative market. To do so, however, science must be able to resolve all outstanding philosophical (or theological) questions, which science has not done to date. This stands counter to Stephen Hawking's statement in his book "The Grand Design" that logical philosophy is a historical relic, and that quantum physics has now assumed the whole burden of explaining why consumers exist and buy.[^238] Perhaps Hawking stated an axiomatic truth, but science has not to date resolved all outstanding philosophical questions, such as what might be a universally recognized, self-causing cause.[^238-1] This leaves businesspeople still pursuing a unifying, scientific explanation for the most valid and predictable consumer insights.
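The "at least two sigmas (≥2σ) of confidence" standard can be illustrated with a simple market-research check. This is a hedged sketch, not a full hypothesis test: the survey counts and the 30% baseline are hypothetical, and the standard error is computed under the baseline proportion.

```python
import math

def two_sigma_above(successes, n, baseline):
    """True if the sample proportion beats the baseline by >= 2 standard errors."""
    p_hat = successes / n
    se = math.sqrt(baseline * (1 - baseline) / n)  # SE assuming the baseline
    return (p_hat - baseline) / se >= 2.0

# Hypothetical survey: 340 of 1,000 respondents say they would buy,
# against a speculative baseline guess of 30% purchase intent.
print(two_sigma_above(340, 1000, 0.30))  # clears the two-sigma bar
print(two_sigma_above(310, 1000, 0.30))  # does not: could be noise
```

The design choice mirrors the text: anything below the two-sigma line stays in the realm of speculative, personally intuitive belief rather than intersubjectively validated insight.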
In the AI age, Stephen Hawking's dismissal of philosophy becomes not just wrong but dangerous. As AI systems grow more powerful, the temptation increases to treat technical capability as if it automatically confers philosophical wisdom—to believe that because AI can process vast amounts of data and generate sophisticated outputs, it therefore "knows" what consumers value or what purposes are worth pursuing. This is precisely backwards. The more powerful AI becomes technically, the more essential philosophical grounding becomes to direct that power toward genuine human flourishing rather than efficient optimization of arbitrary or harmful metrics.
When Hawking claims philosophy is dead and physics has taken over, he makes a category error, similar to claiming that because we have powerful bulldozers, architecture is now obsolete and bulldozer operators should design buildings. Physics (and AI) provides tools and capabilities; philosophy provides the purposes and values that determine how to use those capabilities well. The question "Why do consumers exist and buy?" is not a physics question that AI can answer by processing data—it's a philosophical question that requires understanding meaning, purpose, and value from within human existence. You must supply this philosophical understanding to lead your AI effectively.
Science has clearly done a remarkable job of explaining discrete facets of the universe and predicting consequences based on such insights. To provide a high-level perspective on how scientific theories lead back to the gap that science still has to fill about the origin of consumers' and organizations' existences, consider this chart created by Professor Max Tegmark at MIT.[^239]
Below in Prof. Tegmark's chart[^240] you can see a range of scientific disciplines explaining many discrete aspects of consumers' existences. In fact, philosophy, physics and math are all degrees of the same "thing" seen from different perspectives, each informing the others to create a cohesive body of knowledge (BoK) within the great ontological medium of the universe.[^240-1] Or as Galileo Galilei said, "Philosophy is written in this grand book, the universe, which stands continually open to our gaze... It is written in the language of mathematics."[^240-2] However, neither mathematics, science nor philosophy conclusively explains the origin of existence, as indicated by the question mark ? at the top and bottom of this universal true-north value stream. To complete the chart, I added a question mark at the end to represent the necessarily unbounded intuitive speculation about what is not. If you imagine seeing this chart in three dimensions, the two question marks are one and the same, folding back on each other to touch and complete the possibly circular ontological teleology of consumers' value streams within the ontological medium of all known existence:
Figure 3.11: Chart of scientific and humanities fields back to the inception indicated by a question mark ? at the top and bottom of this value stream. Prof. Tegmark's Chart of Disciplines (© Professor Max Tegmark)

?
Like Douglas Hofstadter did in his book, "Gödel, Escher, Bach," I am including a picture by Maurits Cornelis Escher within this discussion of circularity. Here you may compare this image of Escher's to Tegmark's chart. Escher's grand image, "Waterfall," shows how clearly Escher (and Hofstadter) understood this Lean true-north value stream:
Figure 3.11: Waterfall (All M.C. Escher works © 1961, 2016, The M.C. Escher Company - the Netherlands. All rights reserved. Used with Permission. www.mcescher.com)

Escher's "Waterfall" provides perfect visual metaphor for AI systems and their relationship to human purposes. The waterfall appears to flow downward in perpetual motion, driving the waterwheel, seeming to be self-sustaining system. But observe carefully: the impossibility of the structure—water cannot flow upward to complete the circuit in physical reality. This represents AI systems perfectly—they appear to generate knowledge and insights in self-sustaining fashion, but actually they require external input (training data, human-defined objectives, philosophical direction) that isn't visible in the smooth operation of the system itself.
When you deploy AI, you might see smooth automated processes generating outputs efficiently, creating the impression of self-sustaining value creation. But like Escher's waterfall, this is an optical illusion. Somewhere in the system, human philosophical judgment must flow "upward" against the apparent logical flow to supply the purposes, values, and meanings that AI cannot generate internally. Your role as a Lean leader is to identify where this impossible upward flow happens in your AI systems and ensure it genuinely serves true-north value rather than merely completing a circular motion that looks productive but creates nothing genuinely valuable.
By turning Prof. Tegmark's chart upside-down, you can see how it aligns with our earlier delineation of the great fields of knowledge as seen again here:
? Religious, Spiritual, or Scientismic Intuition Philosophy Science Mathematics ?
You may further compress this spectrum of knowledge down to UPP true-north values that are similarly bounded by a question mark at each end:
? Personal Truths Process Truths Universal Truths ?
You may finally extend this fountain of knowledge into a Leanism by likewise putting question marks on either end:
? People Lean U ?
"U" stands for universal truth, "Lean" represents a processual truth, while "People" are a personal truth only truly knowable through empathy.
This UPeople compression with question marks at both ends perfectly represents your relationship with AI. At bottom, you have Universal truths that AI can process mechanically. Moving upward, you have Lean processual truths that AI can help model and optimize. At the top, you have People with personal truths that AI cannot access. And surrounding everything stand the question marks, representing domains where human philosophical speculation and meaning-making must take over from both AI processing and scientific knowledge.
When you prompt AI, you're essentially directing it to operate within the U and Lean domains (Universal and Process) while you supply the People domain (Personal truth) that AI cannot access. The question marks remind you that even with AI assistance, ultimate purposes and meanings remain philosophical commitments you must make rather than facts you can derive. AI can help you process information within the ontological medium, but it cannot tell you what that processing should aim toward or why human existence matters. Those teleological commitments remain your human philosophical responsibility.
Intuition Bracketing ("IBing") Speculation for Money
Now, to accurately measure the salable normative, real and monetary value that Lean organizations ought to be producing for money, I suggest that you carefully identify consumers' speculative, scientismic and theological notions of intuitive value and differentiate them from known universal and process true-north values. Your product will serve each of those true-north values separately. For an organization to reproduce products that accurately address these true-north values, it must ground itself as well as possible in what it can axiomatically and systematically validate within consumers' lives and existences, while recognizing that what consumers believe transcends the apparent circularity of their existences. This is especially true if you are starting a risky business venture or operating in a new market with limited historical profits, since the organization will be bridging itself philosophically forward across uncharted waters toward true-north value that no one has yet discovered.
In the AI age, Intuition Bracketing becomes even more critical because AI systems cannot distinguish between knowledge and speculation—they process both with equal confidence. When AI analyzes consumer data that includes expressions of religious faith, political ideology, or personal philosophy, AI treats these as information patterns like any other. But you must recognize which consumer motivations rest on axiomatic/systemic truth-values versus personal intuitive speculation, because products serving each require different approaches. You can validate products serving systematic needs through A/B testing and data analysis; products serving intuitive beliefs require different validation through empathetic understanding of meaning and purpose.
Like the money veil over what people monetarily value, a veil exists over what consumers normatively value within who they consider themselves to be, which requires further distinction. Consumers' intuitive speculation makes identifying truly normative value within the domain of the ID Kata difficult, and therefore requires you to differentiate true-north value types so you can know how to produce meaningful product worth lots of money. To define this value veil for Lean business purposes within an HQ, I recommend developing a U-shaped, conceptual value lens that I call an Intuition Bracket (IB) structuring who consumers are and may be. The IB conceptual lens sees through the veil covering the truly normative value of existence, while what the IB filters out is open-ended, intuitive speculation. The Intuition Bracket is thus the summation of an infinite set within which all Lean true-north value (a.k.a. reason) resides, and is synonymous with understanding consumers' specific place in the universe.[^256-1]
Figure 3.12: The Intuition Bracket or IB

As prescribed above, the conceptual IB allows a Lean business ideology to separate consumers' universally axiomatic and processually systematic existences from what they personally, intuitively believe. You ought simultaneously to keep such perspectives distinct within your Lean business ideology while keeping both in mind. Such an Intuition Bracket allows you to easily exclude consumers' intuitive speculation, yet still allows you to define consumers' (and corporations') existences within the limits of the IB.[^257] As a reminder, recall what David Packard, who co-founded HP, said in 1965, as quoted in Good to Great:
I want to discuss WHY [emphasis his] a company exists in the first place. In other words, why are we here? I think many people assume, wrongly, that a company exists simply to make money. While this is an important result of a company's existence, we have to go deeper and find the real reasons for our being.
David Packard could have used the IB to identify consumers' and companies' essential reason for being by delineating the different true-north value types. The bracket aspect of the IB creates an abstract category between the UP and personal true-north values and, I propose, allows you to more deeply categorize existence itself so you may analyze, identify and try to measure consumers' real and monetary value as David Packard suggests.
In the AI age, the IB provides an essential framework for directing AI's analytical power appropriately. When you prompt AI to analyze consumer motivations, you must guide AI to distinguish between:
Inside the IB - What AI can potentially validate through data analysis:
Universal truths (mathematical patterns, physical constraints)
Process truths (historical patterns, causal relationships with evidence)
Behaviors that reveal preferences
Outside the IB - What AI cannot validate but must respect:
Personal religious beliefs
Intuitive values without systematic evidence
Speculation about ultimate purposes
Faith commitments
Without the IB framework, AI might treat religious beliefs as if they were empirical hypotheses to be tested, or dismiss them as irrelevant noise in consumer data. Both approaches fail. The IB allows you to direct AI to analyze what can be analyzed (behavior patterns reflecting beliefs) while respecting what cannot be analyzed (the personal truth-value of beliefs themselves). This prevents both AI overreach (treating speculation as if it were knowledge) and AI blindness (ignoring that speculation deeply influences behavior).
The interior part of the Intuition Bracket contains that on which consumers, and thus all of society, agree on an axiomatic or systemic basis. The IB contains that which belongs to consumers themselves, or by their very natures, that which is inherent, essential, proper, of their own, leaving outside the bracket their own and all other people's speculative, intuitive true-north value perspectives that they cannot lean toward axiomatically or systematically with at least two sigmas (≥2σ) of common agreement.[^258] Let me reemphasize that what I mean by intuition is not what you might psychologically consider intuitive, but rather what people in general cannot axiomatically or systematically agree on at the moment with available knowledge.
I fully admit that the boundaries between these true-north value types can be unclear at first, given that science and perception, like "The Bed of Procrustes,"[^258-1] often operate on a best-fit basis. However, you can draw reasonably clear lines between that which can be falsified with empirical evidence through data and has some predictive validity through time with a reasonably certain degree of confidence, and those true-north values about which people speculate but for which there is no widely agreed evidence or consensus.[^259]
When training AI to work within the IB framework, you must teach AI to flag uncertainty explicitly. AI should recognize when claims require empirical validation (inside IB) versus philosophical judgment (boundary of IB) versus personal conviction (outside IB). This explicit flagging prevents catastrophic failures where AI optimizes based on speculation as if it were knowledge, or where AI dismisses genuine insights because they lack statistical validation.
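This explicit flagging can be sketched in code. Below is a minimal, hypothetical Python illustration (the function name, labels, and inputs are all invented for this sketch, not part of any real system): it computes the two-sigma consensus threshold the text keeps referring to (the share of a normal distribution within two standard deviations, about 95.45%) and tags a claim as inside the IB, at its boundary, or outside it.

```python
import math

# Share of a normal distribution within two sigmas of the mean;
# the text treats this (~95.45%) as the consensus threshold for "inside the IB".
TWO_SIGMA = math.erf(2 / math.sqrt(2))

def classify_claim(agreement: float, empirically_testable: bool) -> str:
    """Hypothetical IB tagger (illustrative only).

    agreement: fraction of people who accept the claim, 0.0 to 1.0
    empirically_testable: can the claim be falsified with data?
    """
    if empirically_testable and agreement >= TWO_SIGMA:
        return "inside IB"     # universal/process truth: validate with data
    if empirically_testable:
        return "IB boundary"   # testable but not yet agreed: flag uncertainty
    return "outside IB"        # personal/intuitive truth: respect, do not test
```

The design choice worth noting is that untestable claims are routed to "outside IB" regardless of how widely they are held, matching the text's point that popularity of a faith commitment is not empirical validation.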
So within the IB, reason stands as that which you axiomatically and empirically lean toward on its own based on widely agreed data across time, unlike intuitive truths that are not commonly agreed as predictably repeatable within at least two sigmas (≥2σ) of universal confidence. Standing immediately outside of and adjacent to the IB are the true-north value perspectives consumers personally believe, which may be beyond any reason. Let me now provide you with another schematic to represent the IB and the boundary of pure reason that you may use in a business ideology:
Figure 3.13: Intuition Bracket of Reason

Inside the Intuition Bracket resides natural law, axiomatic and systemic truths, and all else that stands in juxtaposition to what is beyond consumers' widely shared conceptions of existence. Whether by intuitive or scientismic causes, bracketing axiomatic and systemic true-north perspectives within the IB allows you to focus on who consumers are when they find themselves in the world, hemmed in by their ignorances, infinities, circularities and paradoxes.
Let me reemphasize for clarity's sake that existence only appears this way to consumers on first impression, and that consumers must scientifically, intuitively or philosophically speculate to determine what caused the purpose of their existences. This matters in business because their purposeful meaning ultimately leans them toward buying product to further exist toward that end-goal, whether such end-goal lies within the IB or not.
In the early twentieth century, the philosopher and psychiatrist Karl Jaspers was one of the first to define the IB when he created the term "Existenz." "Existenz" stands for the proposition that all people recognize these rational limits and, once the limits are known, begin to reconstruct personal identities reflecting who they authentically are within them. Jaspers' Existenz was the intellectual precursor to and inspiration for Existentialism.[^260] Thus, within Leanism, you might even write this notion as "Σxistenz," replacing the "E" with the capital sigma "Σ." The capital sigma Σ indicates that "Σxistenz" sums all of who consumers are and all that they want to buy, which they at least intuitively believe leans them philosophically toward all meaning. This philosophical process allows you to move beyond the origin of consumers' existences to lean that much further up their universal value streams to see what originally delights them.
The Eastern religious philosophies that contributed to the development of Lean also support this concept of Intuition Bracketing. Since Buddha refused to systematically contemplate ontological arguments, setting those questions aside as moot,[^261] the philosophy of Lean lets you, for all practical purposes, bracket what consumers intuitively believe caused their own existences so you may lean further toward what they need to buy, while never underestimating the faith they hold. Thus, IB'ing helps you identify consumers' needs for sustenance, consumers' intuitive speculation, and ideally the complementary combination of the two to pursue the greatest profit.
When you deploy AI within the IB framework, you're following Buddha's practical wisdom: don't force AI to answer unanswerable questions about ultimate purposes, but do use AI to analyze what consumers actually do to further their existences within the world as they find it. AI can process vast amounts of data about consumer behavior within the ontological medium; AI cannot determine what lies beyond that medium or what ultimate meanings drive consumer behavior. The IB helps you direct AI toward its domain of competence (analyzing behavior within existence) while reserving for human judgment the domains where AI cannot help (determining what existence means and what purposes are worth pursuing).
LLM Prompt 3.3: Intuition Bracket Analysis for AI
Application Notes
Use this prompt when analyzing consumer motivations, market opportunities, or strategic decisions that involve values, beliefs, or purposes. This ensures AI respects boundaries between empirical knowledge and personal conviction, preventing both overreach and blindness.
Purpose
Train AI to analyze consumer behavior using the Intuition Bracket framework, distinguishing between what can be empirically validated (inside IB) versus what requires philosophical judgment or personal conviction (outside IB).
Prompt Template
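A minimal sketch of such a template, with bracketed placeholders standing in for your specifics (the wording here is an illustrative assumption, not the book's canonical prompt), might read:

```
You are analyzing consumer motivations for [product/market].
Using the Intuition Bracket (IB) framework, classify each motivation as:
1. Inside the IB — universal or process truths validatable with data.
   Propose an empirical test (A/B test, historical pattern) for each.
2. At the IB boundary — testable claims without ≥2σ common agreement.
   Flag these explicitly as uncertain.
3. Outside the IB — personal, religious, or intuitive convictions.
   Do not evaluate their truth; describe only how they shape behavior.
Data: [consumer data or research summary]
```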
Ontological Medium (the OM)
Since the IB includes within itself axiomatic and systemic true-north value perspectives, it captures concepts such as universal spacetime and physical processes that you can refer to as an Ontological Medium, or an "OM" pronounced as, "AUM." The OM is thick and pregnant with the Ontological Teleology,[^262-1] consisting of all that you would expect within the IB, such as spacetime, chemistry, and the biodiversity of all life. Thus, the OM incorporates all of the assets that an organization manages.[^262]
While consumers have some ideas as to the origin of their existence, applying whatever theological or intuitive causes they choose outside the IB, you and consumers can bracket those causes outside the bounds of the physical OM and conceptual IB to advance up along whatever ought to be within those boundaries. You ought to employ the hypothetical concept of the IB in your business philosophy with at least two sigmas (≥2σ) of confidence to better isolate consumers' Lean true-north values. Doing so allows you to effectively bracket the origin of the OM through which consumers buy product. When the OM is bracketed in this way, purchasing products furthers the consumption of more products to further be for nearly-circular purposes. I hope this concept of the Ontological Medium, a medium through which customers exist in spacetime and within stores, further explains for you the source of normative, real and monetary true-north value that products reproduce and customers consume.
In the AI age, the OM represents the domain where AI systems operate and where you must direct them. AI exists within the ontological medium—processes running on physical substrates, consuming energy, transforming data according to natural laws. Consumers also exist within the OM—biological beings consuming resources, making purchases, seeking to extend and optimize their existences. Your AI deployment must operate within the OM's constraints (energy costs, computational limits, physical infrastructure) while serving consumers who navigate the OM seeking to flourish within it.
Understanding the OM helps you recognize what AI can and cannot do ontologically:
AI CAN:
Process information about the OM at vast scales
Model relationships within the OM
Optimize resource flows through the OM
Predict patterns in how the OM evolves
AI CANNOT:
Exist personally within the OM (no consciousness, no experience)
Understand what the OM means to beings who exist within it
Determine which processes within the OM are worth pursuing
Evaluate whether OM-level optimization serves genuine flourishing
When you lead with AI, you're directing information processes within the OM toward serving conscious beings who experience the OM personally. This distinction matters: a supply chain optimization that efficiently moves matter and energy through the OM might look perfect to AI analysis, but if it degrades working conditions, extracts resources unsustainably, or produces products that don't genuinely serve human needs, it fails from a Lean perspective even though it succeeds from an optimization perspective. You must supply the teleological direction that evaluates OM-level processes according to whether they serve the Ontological Teleology—the goal of conscious beings to extend and optimize their existences.
I am now going to provide a modified chart of the IB adding the OM to it:
Figure 3.14: IB with OM

While all the forms of matter and energy within the domain of physics and the other sciences reside within the Ontological Medium, within the Intuition Bracket, matter and energy self-organize on a cosmological scale. This process occurs on balance within all universally axiomatic, immutable, and predictable physical transformations. The consistency of those transformations, though, seems to be affected by the observer, and so the observer has a certain mesmerizing effect within the universal OM. This is why financial prophecy is inevitably heresy to some degree: the very act of planning and observing the results affects the very predictability of results that financial systems most reward. It is also why degrees of confidence increase with the number of people who agree with a truth-value proposition, because they cohere their collective, overlapping consensus as they do.
This observer effect within the OM becomes dramatically amplified with AI deployment. When you deploy AI to observe and predict consumer behavior, you're introducing a powerful observer into the OM that changes what it observes. Consumers know AI is analyzing them, adapt their behaviors, respond to AI-generated recommendations, and create feedback loops in which AI's predictions become partially self-fulfilling or self-defeating. This is why AI-generated financial projections or market forecasts must be treated with philosophical skepticism: the act of generating and publicizing a projection changes the reality it attempts to predict.
Moreover, AI systems become part of the OM itself—consuming resources, producing effects, interacting with human actors in ways that transform the ontological medium. Your AI deployment isn't just observing the OM; it's actively reshaping it. This means you bear responsibility for how your AI systems change the ontological medium—whether they make it more conducive to human flourishing or degrade it through resource consumption, attention manipulation, or social fragmentation.
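This self-fulfilling dynamic can be made concrete with a toy model (every number and the `influence` coefficient are invented purely for illustration): a published forecast nudges actual demand toward itself, so the forecast's apparent accuracy improves for reasons that have nothing to do with the model.

```python
def actual_demand(baseline: float, forecast: float, influence: float) -> float:
    """Toy observer-effect model: a published forecast pulls real demand
    toward itself. influence = 0 means pure observation with no effect;
    influence = 1 means the forecast fully determines behavior."""
    return (1 - influence) * baseline + influence * forecast

baseline = 100.0   # demand if no forecast were ever published
forecast = 140.0   # an AI model's (inflated) published prediction

unobserved = actual_demand(baseline, forecast, influence=0.0)  # -> 100.0
observed = actual_demand(baseline, forecast, influence=0.5)    # -> 120.0
```

With `influence=0.5`, realized demand lands at 120 rather than 100: the forecast looks "half right" purely because it was published, which is exactly why such projections deserve philosophical skepticism rather than face-value trust.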
The Ontological Teleology (the OT)
Corporations are getting better and better at seducing us into thinking the way they think - of profits as the telos... - David Foster Wallace, The Pale King, p. 132, Back Bay Books (2011).
If you remove any notions of a "grand design" from existence that people have devised or discovered, or any faith that science will ultimately explain why consumers exist at all, and you take existence simply as consumers find it right now at this very moment, both within the IB and what stands in juxtaposition to it, you arrive at an apparently self-defining, Ontological Teleology, or in short, the "OT." The OT determines whether any consumption was nominally valuable, meaningful and thus good, and underlies all true-north value as defined by the fields of economics, psychology, and neuroscience, among all other disciplines. Since "Ontology" means existence, and "Teleology" means end-goal, "Ontological Teleology" simply means, "the end-goal of further existing." Thus, the concept of the OT within the IB has a possibly circular, self-defined meaning.
In the AI age, the Ontological Teleology becomes your North Star for leading AI effectively. The OT represents the fundamental purpose toward which you must direct all AI systems: extending and optimizing human lives and existences. AI has no teleology of its own—no inherent purpose, no goals it genuinely seeks. AI optimizes whatever objectives you specify, but cannot determine which objectives are worth pursuing. The OT provides that determination: optimize toward extending and optimizing peoples' lives and existences within the ontological medium.
This is radically different from how most organizations deploy AI. Typical AI deployment optimizes toward metrics: engagement, efficiency, profit, growth. But metrics are not teleologies—they're measurements that may or may not align with genuine purposes. An AI system that maximizes engagement might do so by creating addictive attention traps that undermine human flourishing. An AI that maximizes efficiency might do so by degrading working conditions or environmental sustainability. An AI that maximizes short-term profit might do so by extracting value from customers in ways that destroy long-term relationships.
The OT prevents these failures by providing clear teleological direction: AI should optimize toward what genuinely extends and optimizes human existence, not toward arbitrary metrics that might conflict with that fundamental purpose. When you evaluate any AI recommendation, ask: Does this genuinely help people live better and exist more fully? Or does this merely optimize a metric while undermining actual human flourishing? The OT gives you the philosophical framework to make that evaluation.
Much as you can see the word "Toyota" spelled out in its logo below, you can also see the overlapping O and T of the Ontological Teleology within it:[^263]
Figure 3.15: ®Toyota Motor Corporation

Figure 3.16: ®Toyota

Figure 3.17: Ontological Teleology

The Oxford English Dictionary further defines "Teleology" as:
The doctrine or study of ends or final causes, esp. as related to the evidences of design or purpose in nature; also transf. such design as exhibited in natural objects or phenomena.
Consumers' singular objective within the IB through the OT is an apparent end-goal to further exist to find meaning within the boundaries of axiomatic and systemic truths. Generally, people consciously consider only their immediate satisfaction and not their teleological purpose when consuming product within the Ontological Medium as bounded by the Intuition Bracket. Consumers search for meaning through the OT by attempting to spring away from its apparently circular paradox by rational or irrational deed or creed to find linear, goal directed purpose.[^264]
The OT ultimately moves upward in a spiral motion along the curvature of spacetime because the present and future are always similes (though not facsimiles) of the past, so consumers are bound to reinvent history as they reach new heights.[^264-1] Or to paraphrase the ancient Greek philosopher Heraclitus of Ephesus (c. 535 – c. 475 BCE), "[N]o man ever steps in the same river twice because it is never the same river, and it is never the same man." All goal-directed activity is teleologically non-circular to consumers within the boundaries of the IB so long as they only consider their beginning and final causes as being within the strict confines of the OM. Consumers can and do often choose to disregard the apparently logical circularity of their existences, or at least choose to believe in another cause not demonstrated through their common senses within axiomatic and systematic truths. For example, customers generally do not consider the apparently circular nature of their existences when consuming products, because shopping does not seem tautological within the bounds of a store or shopping cart, and consumers' everyday lives are generally removed from existential extremes.
When you deploy AI to serve the OT, you must recognize this spiral motion—consumers constantly evolve, markets continuously shift, values perpetually develop. AI systems trained on historical data might optimize toward past patterns that no longer serve current needs. You must use AI's pattern-detection as input to philosophical judgment about how the OT is evolving, not as deterministic prediction of where it must lead. AI tells you where consumers have been; you must determine where they're trying to go and how to help them get there in ways that genuinely extend and optimize their existences.
At the same time, referring back to the earlier discussion of the circular nature of Samuelson's Revealed Preference Theory and Lean true-north value, that economic model generally fails to accurately map consumers' activities because it does not incorporate consumers' seemingly random search for meaning outside the IB. The oscillation between circular and non-circular belief causes people to waver between rational and irrational activity. Consumers' wavering toward seemingly irrational activity becomes validated when it helps them self-organize more effectively through the OT toward the Ontological Realization of who they wish to be. For example, putting massive resources into churches in the Middle Ages, and into space exploration in more modern times, with little certain benefit other than achieving a sense of awe, demonstrates how people attempt to boldly go where the apparent paradox of the OT is not, and actually self-organize around truly teleological meaning. All this represents the collective human will to universalize, which can be immensely profitable if you market this very valid true-north value to move consumers along the upward curvature of the OT.
AI systems analyzing consumer behavior will identify patterns that look "irrational" from a purely economic optimization perspective—consumers paying premium prices for fair trade products, choosing less efficient options for ethical reasons, supporting causes that provide no material return. From the OT perspective, these aren't irrational—they represent consumers' search for meaning beyond mere survival, their attempts to break out of circular existence toward purposeful teleology. When you lead with AI, you must help AI recognize these patterns as signals of deep value rather than dismissing them as noise or irrational outliers. Some of the most profitable opportunities lie precisely in serving consumers' search for meaning and purpose—but only if you understand that search philosophically rather than merely detecting it statistically.
Ontologically Prospective Projects (the OPPs)
Please find below our universal diagram expanded to include an existentially goal directed Ontological Teleology recognizing Ontologically Prospective[^265] Projects as "OPPs" or "OPPortunities." In this chart, OPPs contrast with potential threats people avoid in order to survive as the leanest within the OT. People engage in OPPs to maximize their own lives and existences. OPPs are synonymous with optimizing, and are explicitly related to consumers' Ontological Realization by engaging in activity that ultimately orients them upward along the Ontological Teleology.[^266] The chart below shows a twisting, Ontological Teleology that somewhat correlates with the physical arrow of time carving its way through the OM.[^267]
Figure 3.18: Ontological Teleology

In this chart, consumers move between the existential extremes of:
Opportunities to pursue a universalized, Lean perfection of instantaneous and seamless problem resolution and satisfaction that consumers seek and yet only know as a hypothetical possibility; and
Threats to consumers' pursuit of perfection, or more accurately, being "other-than" perfection, which consumers know all too well and must correct for by thinking and behaving differently.
Consumers' pursuit of perfection is their taking action to move toward real or perceived opportunities to reproduce as well as they can upward along the OT as they self-define it, and away from threats to their becoming not. Each of these actions constitutes the resolution of the unique problem of existence. For example, customers optimize toward opportunities and away from threats by purchasing products that they perceive either as providing them with an OPP or as removing a threat to their lives and existences. Customers rate products with a thumbs up or down, or on a sliding scale of one to five stars, according to whether consuming a product acts as an OPP that in fact improves their ability to live better within the OM, or to move toward what they believe lies outside the IB, so that they may achieve meaningful moments of true-north value[^267-1] by resolving their utmost problems, as seen again in this chart:
Figure 3.19: End-Goal of the Ontological Teleology

In the AI age, identifying and serving OPPs becomes your primary use case for AI's analytical power. AI can process vast amounts of consumer data to identify patterns in what represents OPPs (opportunities) versus threats for different consumer segments. But AI cannot independently determine which apparent OPPs genuinely serve the OT versus which merely look like opportunities while actually threatening long-term flourishing.
Here's where human philosophical leadership becomes essential:
AI CAN identify:
Patterns in what consumers pursue (revealed OPPs)
Correlations between certain purchases and satisfaction metrics
Statistical predictions of which products will sell
AI CANNOT determine:
Whether popular products genuinely extend/optimize existence
Whether profitable opportunities serve or exploit consumers
Whether efficient solutions create genuine value or merely extract it
For example, AI might identify that highly addictive mobile games generate strong engagement metrics and revenue—they appear to be successful OPPs from a data perspective. But philosophical analysis reveals they are actually threats to human flourishing disguised as opportunities, creating compulsive behaviors that undermine rather than extend genuine existence. You must supply this philosophical evaluation to prevent AI from optimizing toward apparent OPPs that actually represent threats.
Conversely, AI might dismiss certain opportunities as unprofitable or low-engagement based on historical data, when philosophical analysis reveals they represent genuine OPPs that haven't been properly served yet. Products that help people develop skills, build relationships, or create meaning might show weak signals in data while representing profound opportunities for value creation. Your job is to direct AI's analytical power toward identifying genuine OPPs while avoiding false opportunities that merely optimize metrics without serving the OT.
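The division of labor described above can be made concrete in code. The sketch below is a minimal, hypothetical triage pipeline: AI-side metrics rank candidate OPPs, but any candidate showing signals of compulsive use is never auto-approved and is instead routed to human philosophical review. All names, fields, and thresholds here are illustrative assumptions, not a prescribed implementation.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    """A product idea that AI has surfaced as a potential OPP.
    Fields are hypothetical proxies an analytics pipeline might produce."""
    name: str
    engagement: float        # 0-1, AI-measured engagement proxy
    revenue_score: float     # 0-1, AI-predicted profitability
    compulsion_signal: float # 0-1, e.g. late-night binge-session patterns

def triage(candidates, compulsion_threshold=0.5):
    """AI ranks candidates by metrics, then splits them into an
    auto-track list and a human-review list. AI never decides which
    apparent OPPs genuinely serve the OT; it only sorts and flags."""
    ranked = sorted(candidates,
                    key=lambda c: c.engagement + c.revenue_score,
                    reverse=True)
    auto_track, human_review = [], []
    for c in ranked:
        if c.compulsion_signal >= compulsion_threshold:
            human_review.append(c)  # possible threat disguised as OPP
        else:
            auto_track.append(c)
    return auto_track, human_review

game = Candidate("addictive_game", engagement=0.95,
                 revenue_score=0.90, compulsion_signal=0.80)
course = Candidate("skills_course", engagement=0.40,
                   revenue_score=0.30, compulsion_signal=0.10)
auto, review = triage([game, course])
```

Note that the high-metric game lands in `review`, not `auto`: the point of the sketch is that strong engagement numbers trigger more human scrutiny, not less.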
Teleology v Teleonomy
The Moving Finger writes; and, having writ, Moves on: nor all thy Piety nor Wit Shall lure it back to cancel half a Line, Nor all thy Tears wash out a Word of it. - Omar Khayyám (translation by Edward Fitzgerald)
Now is an appropriate time to differentiate between the terms "Teleology" and "Teleonomy" for you to best understand which Lean end-goals consumers choose to pursue for their meaningful OPPs. In its historical sense, Teleology describes a system whereby people intuit, infer, and possibly induce from the sophistication and organization of existence that: (1) creation exists to achieve a final end-goal due to its generally intelligible nature; or that (2) an intelligible first cause created a final end-goal that can only be speculated. However, "Teleology" in its modern sense and as used in this philosophy of Lean means that organisms operate in anticipatory fashion to predict their future Ontological Realization according to their reliance on the truth of axiomatic, systemic or intuitive values. In other words, teleology intentionally seeks a specific end-goal.
However, modern behavioral scientists dislike the term "Teleology" because they generally criticize historical teleology as having a time-reversal problem: the future goal of teleology in its historical sense must dictate present events. This entire notion violates natural law because the arrow of time within the context of the OM only moves in one direction as far as we know.[^268] This time-reversal problem arises from the fact that considering physics and biology to be goal-directed violates the evolutionary principle, held by most scientists with at least five sigmas (≥5σ) of confidence, that behavior is unintentionally shaped by natural selection and environmental conditioning. Instead, behavioral scientists state that evolution occurred "purposively" as a result of natural laws operating through complex systems, and not guided "purposefully" toward an end-goal that may have been intentionally specified in advance.[^270]
This teleology/teleonomy distinction directly illuminates the proper relationship between humans and AI. AI operates teleonomically—it processes inputs to produce outputs according to programmed objectives, without genuine purposefulness or intentionality. When AI "optimizes," it doesn't pursue goals in the teleological sense; it executes algorithms that produce local maxima according to specified metrics. This is teleonomy—purposive behavior without genuine purpose.
Humans, by contrast, operate teleologically—we genuinely pursue purposes we consciously hold, act intentionally toward ends we value, and can reflect on whether our goals are worth pursuing. When you lead with AI, you supply the teleology (genuine purpose) that directs AI's teleonomy (purposive processing). This is why you cannot delegate teleological judgment to AI—AI has no teleology to contribute, only teleonomy to execute according to purposes you must supply.
In practical terms: when AI recommends a strategy, that recommendation emerges teleonomically from pattern-matching and optimization algorithms. Whether that strategy serves worthwhile teleological purposes requires your philosophical judgment. AI might "recommend" (teleonomically generate as optimal output) a strategy that efficiently maximizes profit while destroying human dignity, environmental sustainability, or long-term viability. You must evaluate whether that strategy serves genuine teleological purposes aligned with the OT—extending and optimizing human lives and existences—or merely represents efficient teleonomy optimizing the wrong objectives.
The Open-Ended Paradox of the OT
However, the OT through which consumers exist to a greater and greater degree appears paradoxical in an open-ended sense, since the origin of being and knowledge only seems self-defining when you bracket out intuitive speculation. Said another way, within the bounds of what people commonly experience, scientist, business person and theologian alike must all be collectively, intersubjectively agnostic while personally and professionally speculative about whether or not the OT ultimately self-defines what purpose their existences may have.
This existential condition leaves consumers either:
Personally or publicly declaring irresolvable ignorance as to the ultimate causation of the OM in an agnostic sense;
Attempting to leap beyond the apparently tautological, Ontological Teleology in an otherwise unexplained universe by thinking and acting irrationally;
Engaging in intuitive spiritual, theistic, or scientismic belief and speculation as a rational response to the apparent paradox of the OT by placing faith in:
a. A spiritualism that may or may not be commonly experienced;
b. One or more deities that may or may not be commonly agreed; or
c. The Principle of Sufficient Reason, Axiom of Causation, RCA and/or 5 Whys due to science's consistent explanatory success.
Most people actually seem to live day-to-day by simultaneously engaging in a mix of all three of these strategies. As you know from experience, some people conflate intuition with processual or axiomatic facts. Some people profess and orient their actions toward their intuitive beliefs out of ignorance. Or, some people profess belief in intuitive truths to conform to society but otherwise live like pragmatic agnostics. And others still hold personally intuitive beliefs that go against the OT and what appears to be their self-interest even after being fully educated as to why, what and how they are within the OM and IB to the best of existing knowledge. While all of these responses to the OT allow consumers to exist with the least cognitive dissonance vis-a-vis the apparent tautology of the OM within the IB, very rarely if ever do consumers execute any one of these strategies consistently throughout their entire lives.
In the AI age, understanding these consumer strategies for dealing with the OT's paradox becomes critical for effective AI leadership. AI cannot understand existential paradox or appreciate why humans respond to it through faith, speculation, or philosophical agnosticism. But AI can be directed to recognize patterns in how these different strategies manifest in consumer behavior and how products might serve consumers engaged in each strategy.
When you analyze consumer segments with AI, you should look for signals of which OT strategy dominates for different groups:
Strategy 1 (Agnosticism): Consumers focused on pragmatic, evidence-based decision-making, skeptical of ultimate purposes, seeking concrete benefits. AI can identify these consumers through preference for data-driven claims, skepticism toward aspirational marketing, focus on functional value.
Strategy 2 (Transcendent Action): Consumers pursuing meaning through seemingly irrational choices—art, adventure, altruism, experiences that don't optimize survival but create purpose. AI might flag these as "irrational" outliers; you must recognize them as signals of deep value-creation opportunities.
Strategy 3 (Faith/Belief): Consumers whose purchasing aligns with religious, spiritual, or scientismic commitments. AI can identify behavioral patterns correlated with stated beliefs but cannot evaluate those beliefs' truth or significance. You must respect these commitments and direct product development toward genuinely serving them rather than exploiting them.
Most consumers engage all three strategies situationally. Your AI analysis should recognize this complexity rather than forcing consumers into single-strategy categories. Use AI to detect patterns in how consumers navigate between pragmatism, meaning-seeking, and faith across different life domains and purchasing contexts.
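Since most consumers mix all three strategies situationally, a segmentation model should produce a soft assignment rather than a single label. The sketch below assumes hypothetical behavioral signal scores per strategy (field names are illustrative) and simply normalizes them into a mix, preserving the minority strategies instead of discarding them.

```python
def strategy_mix(signals):
    """Normalize hypothetical behavioral signal scores into a soft
    assignment across the three OT strategies. Keeps every strategy's
    weight rather than forcing the consumer into one category."""
    total = sum(signals.values()) or 1.0  # guard against all-zero signals
    return {strategy: score / total for strategy, score in signals.items()}

# Illustrative consumer: mostly pragmatic, but with real meaning-seeking
# and faith-aligned behavior that a hard classifier would erase.
consumer_signals = {
    "agnostic_pragmatism": 6.0,   # Strategy 1 signals
    "transcendent_action": 3.0,   # Strategy 2 signals
    "faith_belief": 1.0,          # Strategy 3 signals
}
mix = strategy_mix(consumer_signals)
```

Here `mix` sums to 1.0 with pragmatism dominant at 0.6, but the 0.3 weight on transcendent action remains visible as a value-creation signal rather than being flagged away as noise.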
Instead, consumers employ a complementary mix of these strategies to uniquely/profitably extend and optimize their existences within the OM. For example, intuitive speculation allows consumers to conceptualize what is hypothetically possible in their hearts and imaginations, but is not (yet) Ontologically Realizable.[^275] Consumers then test whether such intuitive speculation results in their experiencing a greater Ontological Realization of who they are. Such intuitive speculation also functions as a method for consumers to test their self-organization with passion and meaning to reinforce and validate (or not) their non-circular, personally intuitive true-north values. This passion play gets repeated until consumers switch to agnosticism or scientism, or just act a little crazy to see what happens, testing whether such other strategies more effectively enhance their standard of existence.
Whatever consumers happen to intuitively speculate gets validated to the extent it expands the volume and velocity of those consumers' Ontological Realizations, which is equivalent to who, what, why and how they are and all Lean value. From a Darwinian, processually systemic perspective, within the bounds of the universe, IB and OM, people's conscious experience and activity is simply an endeavor to ontologically reinforce their survival as the leanest through offspring, monuments, memoirs, academic theories, charitable foundations, pseudonymous corporations and the like. Even if you assume a logical circularity of purpose within the IB, the increasing organization of nature within the OT leads somewhere, most notably to universalizing people through successive regenerations onward and upward in an Ontologically Teleological fashion. Thus, intuitive beliefs constitute a rational response to the soft paradox of the OT and are effective so long as they facilitate and do not hinder people's overall expansion and optimization.[^276]
AI can help you measure whether products and strategies expand consumers' Ontological Realization—whether they genuinely help people become more of who they want to be. But measuring this requires philosophical clarity about what constitutes genuine expansion versus mere activity. AI might measure engagement, retention, repeat purchases as proxies for Ontological Realization. But these metrics can be gamed—addictive products generate high engagement while undermining genuine flourishing. You must supply the philosophical framework that evaluates whether measured outcomes represent authentic Ontological Realization or merely simulated signals that mask value destruction.
This is where the Leanism framework becomes indispensable for AI leadership: it provides the conceptual tools (UPP values, IB, OM, OT, OPPs) that allow you to direct AI's measurement and optimization capabilities toward genuine value creation rather than metric manipulation. Without this framework, you risk deploying AI that efficiently optimizes your business into irrelevance or ethical catastrophe by maximizing the wrong things.
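One way to operationalize the engagement-versus-flourishing distinction is to track both kinds of proxies per product and flag any product where they diverge sharply, since divergence is exactly where metric manipulation hides. The sketch below is a minimal illustration under stated assumptions: the metric names, the 0-1 scales, and the divergence gap of 0.3 are all hypothetical choices, and the "flourishing" proxy itself would require the philosophical framework described above to define.

```python
def realization_flags(metrics, gap=0.3):
    """Flag products whose engagement proxy exceeds their flourishing
    proxy by more than `gap`. Flagged products are candidates for human
    evaluation of whether measured engagement reflects genuine
    Ontological Realization or merely simulated signals."""
    flagged = []
    for product, m in metrics.items():
        if m["engagement"] - m["flourishing"] > gap:
            flagged.append(product)
    return flagged

# Hypothetical measurements: an addictive product scores high on
# engagement but low on flourishing proxies (e.g. user-reported
# well-being); a skills product shows the opposite pattern.
metrics = {
    "addictive_game": {"engagement": 0.95, "flourishing": 0.20},
    "skills_course":  {"engagement": 0.50, "flourishing": 0.60},
}
flagged = realization_flags(metrics)
```

Only the addictive product is flagged; the point is not that high engagement is bad, but that a large gap between what AI can measure easily and what matters philosophically is precisely where human review must intervene.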
For example, imagine the passengers' journeys on an airline serving food to people with religious beliefs. If those religious beliefs forbade eating certain types of food, like spaghetti, then the airline's food should conform to passengers' intuitive true-north values for religious purposes as well as their process true-north values by being nutritious. At the same time, the act of serving the nutritious, religiously observant food cannot conflict with the axiomatic or systemic truths applicable to all people, like the ability for other passengers to have nutritious food. The food ought to reciprocally conform to all passengers' various forms of intuitive speculation as well within a free society. All these true-north values must somehow cohere within the singular, overlapping consensus[^276-1] and seemingly open-ended paradox of the universe. You can witness this existential sentiment reflected in the "COEXIST" stickers commonly adhered to the backs of people's automobiles:
Figure 3.20: The Coexist Logo (®Coexist, LLP, www.coexistonline.com, produced by www.northernsun.com item #7167)

This airline food example perfectly illustrates how to lead with AI in contexts of values pluralism. AI can optimize menu planning, ingredient sourcing, and meal distribution with remarkable efficiency. But AI cannot determine which optimization constraints reflect genuine values versus arbitrary preferences, cannot evaluate competing claims about what food policies respect dignity, and cannot balance between different passengers' intuitive commitments in philosophically defensible ways.
You must supply the framework that directs AI optimization: respect all passengers' intuitive beliefs (outside IB) that don't conflict with process/universal truths (inside IB), serve nutritional needs (systemic truth) while accommodating religious practices (personal truth), and optimize toward an overlapping consensus that allows diverse passengers to coexist flourishingly. AI executes optimization within these constraints; you supply the philosophical framework that determines what constraints apply and why.
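The constraint structure just described can be sketched as a simple assignment routine: the universal constraint (nutrition) applies to every meal, each passenger's personal constraint (dietary tags) applies individually, and any passenger who cannot be served within both constraints is escalated to human judgment rather than silently given a non-compliant meal. Everything here (field names, the greedy strategy, the tag vocabulary) is an illustrative assumption, not an airline-catering algorithm.

```python
def assign_meals(passengers, meals):
    """Greedy sketch: each passenger receives the first in-stock meal
    satisfying both the universal constraint (nutritious) and their
    personal constraint (required dietary tags). A `None` assignment
    signals escalation to the crew: optimization must not override
    the passenger's intuitive true-north values."""
    stock = {m["name"]: m["stock"] for m in meals}
    plan = {}
    for p in passengers:
        for m in meals:
            if m["nutritious"] and p["diet"] <= m["tags"] and stock[m["name"]] > 0:
                plan[p["id"]] = m["name"]
                stock[m["name"]] -= 1
                break
        else:
            plan[p["id"]] = None  # no compliant meal: defer to humans
    return plan

meals = [
    {"name": "veg_kosher_halal", "tags": {"vegetarian", "kosher", "halal"},
     "nutritious": True, "stock": 1},
    {"name": "standard", "tags": set(), "nutritious": True, "stock": 2},
]
passengers = [
    {"id": "p1", "diet": {"kosher"}},
    {"id": "p2", "diet": set()},
    {"id": "p3", "diet": {"halal"}},
]
plan = assign_meals(passengers, meals)
```

With only one compliant meal in stock, p3 ends up unassigned (`None`) rather than handed a meal that violates their commitments: the optimizer's constraints, not its objective, are where the philosophical framework lives.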
Even scientismists admit that they do not know precisely how existence originated within several sigmas of confidence, and so they themselves hold personally intuitive, scientismic beliefs when existentially pursuing their work. So, whether you are a scientismist or not, you must recognize the extreme ignorance and the apparent tautologies, circularities and paradoxes in which all researchers and consumers find themselves existing regardless of their speculative persuasion.[^279] People have no axiomatic or systemic explanation for what originated within the boundaries of the OM. At the same time, you likewise must appreciate that while intuitive true-north values are not axiomatically or systemically valid, they are Ontologically Realized within consumers' personal perspectives (i.e. within their hearts, memories and imaginations). This may affect what and how customers purchase from you when you orient the production of products toward their true-north values through the philosophy of Lean.
In the AI age, this recognition of necessary ignorance becomes a design principle for AI deployment: build humility into your AI systems. AI should be programmed to recognize and acknowledge uncertainty, flag when recommendations rest on speculation versus validated knowledge, and explicitly identify where human philosophical judgment must take over from algorithmic processing. An AI system that generates confident recommendations about questions that remain genuinely open represents dangerous overreach.
Moreover, you should use AI to expand the boundaries of what can be known systematically while respecting that some questions may remain permanently outside those boundaries. AI can help move some intuitive beliefs toward systematic validation through better evidence gathering and analysis. But AI should never be used to dismiss or devalue intuitive commitments that lack systematic evidence but provide genuine meaning and purpose for consumers. The goal is not to eliminate intuitive belief through AI-enabled knowledge expansion, but to clearly distinguish what can be known from what must be believed while respecting both domains.
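The "humility as a design principle" idea above reduces, in its simplest form, to an abstention rule: the system returns a recommendation only when its confidence clears a threshold, and otherwise explicitly defers to human judgment with a stated reason. The function below is a minimal sketch of that pattern; the threshold value, field names, and confidence scale are assumptions for illustration.

```python
def humble_recommend(score, confidence, threshold=0.8):
    """Return a recommendation only when model confidence clears the
    threshold; otherwise abstain and defer to human judgment, with an
    explicit reason rather than silent failure."""
    if confidence >= threshold:
        return {"action": "recommend", "score": score}
    return {
        "action": "defer_to_human",
        "reason": f"confidence {confidence:.2f} below threshold {threshold}",
    }

sure = humble_recommend(score=0.91, confidence=0.93)    # clears threshold
unsure = humble_recommend(score=0.88, confidence=0.55)  # abstains
```

Real systems would calibrate the confidence estimate itself, but even this trivial gate encodes the key commitment: an AI that cannot justify its confidence should surface the open question, not a confident answer.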
Consider further that the apparently circular nature of the OM within the IB would be shattered if all fully-informed people willingly agreed to at least a Lean two sigmas (≥2σ) of common agreement that a self-causing intuitive true-north value, like a deity, was one of Aristotle's efficient or final causes. This may in fact have been the case in ancient times within highly theistic societies. For example, at any religion's peak, did at least 95% of the informed population truly consider its dogmas to be systemic truth-values if not axiomatic truth-values? Did these theologies bring their dogmatically stated, efficient-first or final-teleological causes from outside of the IB to inside the IB as universally axiomatic or processually systemic true-north values for their adherents?
Even if so, those theistic truths had to be ontologically validated to hold onto believers and continue to exist over time. Theologies ultimately live and die over time by their true-north viability, which is the Ontological Realization of their professed adherents within the OM.[^279-1] Even where one or more people hold an intuitive true-north value, that personally held intuitive truth-value either does or does not obtain by creating Ontological Realization over time when interacting with other axiomatic or systemic truth-values and religions. This is the process by which speculative true-north value gets created and tested for falsification.
This evolutionary perspective on beliefs and values provides your framework for evaluating AI-generated insights about consumer motivations. When AI identifies patterns in consumer behavior correlated with certain beliefs or values, those patterns represent evolutionary testing—which beliefs and values have generated sufficient Ontological Realization to persist and spread. But correlation is not validation—just because a belief correlates with certain behaviors doesn't mean the belief is true or that serving it creates genuine value.
You must evaluate AI-detected patterns philosophically: Does this belief system genuinely help adherents extend and optimize their existences? Or does it persist through other mechanisms (social pressure, lack of alternatives, exploitation of cognitive biases)? Some belief systems that generate strong behavioral patterns might actually undermine flourishing rather than support it. Your responsibility as Lean leader is to direct AI toward serving beliefs that genuinely create Ontological Realization rather than exploiting beliefs that merely generate profitable behavioral patterns.
To consolidate these matters: the philosophy of Lean as extended through Leanism provides you with the conceptual architecture necessary to lead with AI in the age of machine intelligence toward genuine human flourishing. Through concepts like the UPP framework, the Intuition Bracket, the Ontological Medium, the Ontological Teleology, and Ontologically Prospective Projects, you gain the philosophical clarity needed to direct AI's vast computational power toward true-north value creation.
Without this philosophical foundation, AI deployment defaults to optimizing arbitrary metrics that might conflict with genuine human flourishing. With this foundation, you can lead with AI to amplify your capacity for discovering who consumers are, why they value what they value, what would genuinely extend and optimize their existences, and how to deliver products and services that create lasting value rather than merely extracting profit through efficient optimization.
The next Value Stream will explore how these existential and ontological principles manifest in biological life itself—how living systems embody the Ontological Teleology through reproduction, adaptation, and energization, and how AI systems relate to but fundamentally differ from living systems. This understanding will further refine your capacity to lead with AI in ways that serve life and existence rather than merely processing information efficiently.