Three-Layer Thinking for Building Knowledge Agents
How surface, essence, and philosophy can turn AI from an answer generator into a design partner
Source note: This reflection is based on a class I took with Bill Wu, an AI expert in the Bay Area, on a framework he called Three-Layer Penetration Thinking.
One of the biggest lessons I have learned while building knowledge agents is that the quality of AI output depends heavily on the quality of human thinking that comes before it.
It is easy to treat an LLM as a fast answer machine. Ask a question, get a response, revise the prompt, and keep going. That works for simple tasks. But when the work involves product design, agent architecture, knowledge systems, or real business decisions, surface-level prompting is not enough.
This is where Three-Layer Thinking becomes useful. It gives a simple way to separate what is visible from what is structural, and what is structural from what is truly important.
The method helps when prompting an LLM directly, but it is just as useful when designing a knowledge agent. A knowledge agent needs more than a good answer to one prompt. It needs a way to understand user intent, retrieve trusted knowledge, apply rules, and decide how to respond.
The Three Layers
The framework separates thinking into three layers: phenomenon, essence, and philosophy.
Layer 1: Phenomenon
What is happening on the surface? What did the user ask? What symptoms, events, facts, or visible problems can we observe?
Layer 2: Essence
Why is it happening? What are the root causes, system patterns, constraints, assumptions, and hidden dynamics underneath the surface?
Layer 3: Philosophy
What does it mean? What principles, values, long-term direction, and judgment should guide the decision?
The idea is not to make thinking more complicated. It is to avoid getting trapped at the surface. Many AI prompts stay at Layer 1: answer this question, write this paragraph, fix this bug, build this feature. But the best work usually requires moving through all three layers.
Why This Matters for LLMs
LLMs often mirror the level of thinking in the prompt. If the prompt only describes a surface problem, the answer often stays at the surface too. The model may sound fluent, but the response can still be generic, shallow, or misaligned with the real goal.
When we guide the model through the three layers, we give it a better thinking path, as the sketch after this list shows.
- At the phenomenon layer, the model clarifies what is visible.
- At the essence layer, the model analyzes causes, structure, and trade-offs.
- At the philosophy layer, the model considers direction, principles, and long-term implications.
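To make that path concrete, here is a minimal sketch that stages the three layers as chained calls to a model. This is my own illustration, not a prescribed implementation: the function name and the layer prompts are assumptions, and `llm` stands in for any text-in, text-out completion function.

```python
from typing import Callable

def layered_analysis(question: str, llm: Callable[[str], str]) -> dict[str, str]:
    """Walk one question through the three layers, feeding each
    layer's output into the next prompt."""
    # Layer 1: describe only what is visible on the surface.
    phenomenon = llm(
        f"Describe only what is visible on the surface of this question: {question}"
    )
    # Layer 2: reason about causes and structure, given the surface view.
    essence = llm(
        "Given this surface description:\n"
        f"{phenomenon}\n"
        "What root causes, constraints, and system patterns sit underneath?"
    )
    # Layer 3: step back to principles and long-term direction.
    philosophy = llm(
        "Given this causal analysis:\n"
        f"{essence}\n"
        "What principles and long-term direction should guide the decision?"
    )
    return {"phenomenon": phenomenon, "essence": essence, "philosophy": philosophy}
```

Staging the layers this way forces each pass to build on the previous one, instead of letting the model jump straight to a surface answer.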
How It Helps Build Knowledge Agents
For knowledge agents, this framework is especially useful because domain-specific AI agents are not just chat interfaces. They are systems that must understand user intent, retrieve trusted knowledge, decide how to respond, and behave according to clear principles.
Three-Layer Thinking can help at several levels of the product.
1. Understanding User Questions
A user question has a surface form, but the real intent may be deeper.
For example, a hotel guest may ask, "Can I check out late?" At the phenomenon layer, this is a simple policy question. At the essence layer, the agent needs to understand room availability, hotel rules, possible fees, and whether the answer depends on the guest's booking. At the philosophy layer, the agent should represent the hotel's service values: be helpful, be accurate, and avoid promising something the hotel cannot honor.
That kind of layered interpretation helps the agent avoid shallow answers.
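One way to picture that layered interpretation is as a small structure the agent fills in before answering. The keys and hotel details here are illustrative, not taken from a real system:

```python
# Hypothetical layered reading of the guest's question.
late_checkout_intent = {
    "phenomenon": "Can I check out late?",
    "essence": [
        "room availability",
        "hotel rules and possible fees",
        "whether the answer depends on this guest's booking",
    ],
    "philosophy": "Be helpful and accurate; never promise something "
                  "the hotel cannot honor.",
}
```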
2. Designing the Knowledge Base
The phenomenon layer captures common questions and visible user needs. The essence layer organizes the knowledge behind those questions: policies, rules, exceptions, relationships, and decision paths. The philosophy layer defines what kind of answers the agent should give and what boundaries it should respect.
This is useful because a strong agent needs more than stored text. It needs structured knowledge shaped around real use.
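As a sketch of what "structured knowledge shaped around real use" might look like, here is a hypothetical entry schema. The class, its fields, and the sample policy are all invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class KnowledgeEntry:
    """One unit of knowledge, shaped by the three layers."""
    surface_questions: list[str]   # phenomenon: how users actually ask
    policy: str                    # essence: the rule behind the answers
    exceptions: list[str] = field(default_factory=list)  # essence: edge cases
    answer_boundary: str = ""      # philosophy: what the agent may promise

late_checkout = KnowledgeEntry(
    surface_questions=["Can I check out late?", "What time is checkout?"],
    policy="Standard checkout is 11:00; later checkout may be possible for a fee.",
    exceptions=["Availability must be confirmed against the booking system."],
    answer_boundary="Quote the policy, but never confirm availability "
                    "without checking the booking.",
)
```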
3. Improving Retrieval and Routing
Retrieval is not just finding text that looks similar. For a domain-specific agent, retrieval should connect surface questions to the right underlying answer.
Three-Layer Thinking helps clarify the route:
- Phenomenon: What did the user ask?
- Essence: What intent, policy, or knowledge structure does this map to?
- Philosophy: Should the agent answer directly, ask a clarifying question, call the LLM, or fall back to a safe default?
This maps naturally to a knowledge agent's design: direct answers when confidence is high, LLM synthesis when context is nuanced, and fallback behavior when the system does not know enough.
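A toy version of that routing policy might look like the following. The route names mirror the design above, but the confidence thresholds are placeholders that a real agent would tune against evaluation data:

```python
from enum import Enum

class Route(Enum):
    DIRECT_ANSWER = "direct"      # high-confidence match in the knowledge base
    LLM_SYNTHESIS = "synthesize"  # nuanced context, let the model compose
    CLARIFY = "clarify"           # ambiguous intent, ask a question first
    FALLBACK = "fallback"         # not enough knowledge, fail safely

def route_question(match_confidence: float, intent_is_ambiguous: bool) -> Route:
    """Decide how to respond. The 0.85 and 0.5 thresholds are
    placeholders, not recommendations."""
    if intent_is_ambiguous:
        return Route.CLARIFY
    if match_confidence >= 0.85:
        return Route.DIRECT_ANSWER
    if match_confidence >= 0.5:
        return Route.LLM_SYNTHESIS
    return Route.FALLBACK
```

The point is not the specific numbers but that the decision is explicit: the agent knows when it is answering from trusted knowledge and when it is guessing.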
4. Writing Better Specs Before Coding
This framework also connects directly to structured intent and AI-assisted development. Before asking AI to implement a feature, I can use the three layers to design the feature more clearly.
- Phenomenon: What problem are users experiencing?
- Essence: What system behavior, data model, and workflow are needed?
- Philosophy: What principle should guide the experience?
Once those layers are clear, the specification becomes stronger. And when the specification is stronger, AI-generated implementation becomes more reliable.
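For example, a three-layer spec for the late-checkout feature above might start as a short structured note before any code is written. Everything in it is hypothetical:

```python
# A hypothetical three-layer spec, written before asking AI to implement anything.
spec = {
    "phenomenon": "Guests ask about late checkout and get inconsistent answers.",
    "essence": {
        "behavior": "Answer from policy; check the booking before "
                    "confirming availability.",
        "data_model": ["policy text", "exceptions", "booking lookup"],
        "workflow": "classify intent -> retrieve policy -> route the answer",
    },
    "philosophy": "Never promise what the hotel cannot honor.",
}
```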
A Practical Prompt Template
Here is a simple version of the framework I can use while designing knowledge agent features:
Use Three-Layer Thinking to analyze this feature or problem.
Layer 1: Phenomenon
- What is happening on the surface?
- What are users asking, seeing, or struggling with?
Layer 2: Essence
- What are the root causes, constraints, and system patterns?
- What knowledge, data, workflow, or architecture is involved?
Layer 3: Philosophy
- What principle should guide the design?
- What outcome should this serve over the long term?
Then recommend a clear direction and next steps.
This kind of prompt does not guarantee truth. It does not replace verification, testing, or domain knowledge. But it can push the conversation with the LLM beyond the first obvious answer.
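In practice I can keep that template as a constant and fill in the problem statement programmatically. The `{problem}` slot is my addition to the template; the resulting string can go to any chat model:

```python
THREE_LAYER_TEMPLATE = """Use Three-Layer Thinking to analyze this feature or problem.

{problem}

Layer 1: Phenomenon
- What is happening on the surface?
- What are users asking, seeing, or struggling with?

Layer 2: Essence
- What are the root causes, constraints, and system patterns?
- What knowledge, data, workflow, or architecture is involved?

Layer 3: Philosophy
- What principle should guide the design?
- What outcome should this serve over the long term?

Then recommend a clear direction and next steps."""

def build_analysis_prompt(problem: str) -> str:
    """Fill the template with a concrete problem statement."""
    return THREE_LAYER_TEMPLATE.format(problem=problem)
```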
When Not to Use Three-Layer Thinking
Three-Layer Thinking is useful, but it should not be applied to everything. Some tasks are simple and do not need this much structure.
I would not use it for quick factual lookups, simple rewrites, basic translations, small formatting changes, or straightforward code snippets. In those cases, adding layers can slow things down without improving the result.
The framework is most useful when the problem involves ambiguity, trade-offs, system design, product direction, or long-term consequences. If the task requires judgment, it is worth going deeper. If the task is routine, a direct prompt is usually enough.
Where It Fits with Structured Intent
In my article on vibe coding, I argued that the real shift is from writing prompts to defining systems. Three-Layer Thinking helps with that shift. It gives a way to move from a raw request to a deeper system definition.
The phenomenon layer helps describe the user-facing need. The essence layer helps define the system. The philosophy layer helps clarify the product judgment behind the system.
Together, these layers can become part of the process for writing better specs, designing better knowledge bases, and building better AI agents.
Closing
For me, the value of Three-Layer Thinking is not only that it improves LLM answers. It improves the conversation before the answer. It helps me ask better questions, see deeper causes, and connect design decisions to a clearer purpose.
That is exactly the kind of thinking a knowledge agent needs. A domain-specific AI agent should not simply answer surface questions. It should connect those questions to trusted knowledge, system structure, and clear principles.
Reference
Bill Wu, class lecture on Three-Layer Penetration Thinking for LLM prompting and analysis, Bay Area, 2026. Reflections and knowledge agent applications are my own interpretation.