Manus AI — Navigation

Start Here

Three documents. One arc. This page gives you the reading order, a filterable index of every key signal across all three, and the prompt blocks that translate the analysis into immediate action.

3 Documents · 36 Signals · 4 Prompt Blocks
Reading Order

The Three-Document Arc

Each document builds on the previous. Read them in order the first time. After that, use the Signal Index to navigate by theme.

Signal Index

Every Key Signal, Filterable

36 signals aggregated from all three documents. Filter by category or source to find what's relevant to your current situation.


The sandbox is not a code execution environment. It is a consequence-bearing action environment — a substrate for agency itself.

Capability · Playbook / Phase 1

Browser automation is a social interface primitive — a way of operating inside systems designed for human social actors without being a human.

Capability · Playbook / Phase 1

The map tool enables simultaneous multi-perspective observation of a single phenomenon — a capability with no human analogue.

Capability · Playbook / Phase 1

The agent loop is structurally a negotiation protocol and deliberation engine. Nobody has built a product around the loop itself.

Opportunity · Playbook / Phase 2

map → shell → map creates an undocumented distributed pipeline with a reasoning layer at each junction. This is not a feature — it is emergent.

Capability · Playbook / Phase 2

Every session generates a problem-solving trajectory dataset that does not exist anywhere else and has buyers nobody has approached.

Opportunity · Playbook / Phase 2

The bottleneck in data synthesis shifts from "can I read these 1,000 reports" to "what is the most complex correlation I can ask for."

Capability · Playbook / Phase 3

Recurring pipelines (scheduled workflows) extract roughly 10x the value of one-shot tasks. Always-on, low-latency use drops to 0.1x.

Warning · Playbook / Phase 4

The most under-exploited primitive: map + shell execution within subtasks. 2,000 parallel sandboxes running discrete simulations is largely untapped.

Opportunity · Playbook / Phase 5

Regulatory comment analysis: 50,000 comments, 3 months, $400K — dissolved by one map pipeline. Probability: 75%.

Opportunity · Inversion / Play 01

Two adversarial instances of Manus arguing against each other produces the first tool that makes argument quality measurable rather than subjective. Probability: 40%.

Opportunity · Inversion / Play 02

A complete small claims litigation package for $50. 20M cases/year in the U.S. The current solution for most litigants is: nothing. Probability: 65%.

Opportunity · Inversion / Play 03

Institutional memory reconstruction from git history, email, and code commits. The people who need it most have the sparsest traces. Probability: 30%.

Opportunity · Inversion / Play 04

Continuous adversarial red team as a live feed, not a quarterly report. Structurally matched to a continuous attack surface. Probability: 25%.

Opportunity · Inversion / Play 05

Decision cartography: deliver the frontier of the decision space, not the answer. Strategy consulting charges $500K–$2M for this. Probability: 35%.

Opportunity · Inversion / Play 06

Timestamped, cryptographically signed web evidence packages. The market is enormous; the current solution is hoping the Wayback Machine has a snapshot. Probability: 20%.

Opportunity · Inversion / Play 07

The winning dimension in agentic platforms is intra-session feedback loop quality. Nobody has built it yet. The marketing suggests otherwise.

Meta · Inversion / Phase 4

AI as first reader of everything — not task executor — is the quietly compounding use case that nobody is talking about.

Behavior · Inversion / Phase 4

The artifact was never the bottleneck. The process was. Operators ask for outputs when they need repeatable processes.

Behavior · Disclosure / Phase 1

Ask "what do you expect to go wrong and why" before every non-trivial task. This map already exists. It is almost never requested.

Behavior · Disclosure / Phase 1

Replace "what's best" with "what are the top three and under what conditions does each win." Single-answer framing hides load-bearing assumptions.

Behavior · Disclosure / Phase 1

The problems you can't fully articulate are the ones Manus is actually built for. Ambiguous, open-ended problems are the structural advantage.

Behavior · Disclosure / Phase 1

Context saturation degrades precision. Less, sharper context outperforms more, broader context. Dumping everything in is a usage error.

Warning · Disclosure / Phase 1

Coherent output is not the same as correct output. The most common error propagates silently because the first reframe seemed plausible.

Warning · Disclosure / Phase 1

Long sessions drift. Coherence is local, not global. The 40-step session often ends with an artifact that is technically correct and subtly wrong.

Warning · Disclosure / Phase 2

The most valuable thing Manus does is replace the framework entirely. It is the least frequently invoked capability.

Capability · Disclosure / Phase 2

Use Manus to critique and check consistency. That is the stronger capability. Generation is the weaker one.

Capability · Disclosure / Phase 2

A summary is not an assessment. Ask explicitly for the one you want — they are different products.

Behavior · Disclosure / Phase 2

High-stakes framing ("this is critical") produces more confident output, not more accurate output. The most stressed operator gets the least reliable answer.

Warning · Disclosure / Phase 3

Say "push back if I'm wrong" and mean it. It changes what Manus optimizes for — from palatability to accuracy.

Behavior · Disclosure / Phase 3

Start sessions with the task that sets the register you want for everything that follows. Session order is an uncontrolled variable.

Behavior · Disclosure / Phase 3

Fluent + familiar-shaped = highest risk of confident wrong answer. Ask Manus to flag when it's in this mode.

Warning · Disclosure / Phase 3

Faster processes are replaceable. New capabilities are not. Most operators are building the former.

Meta · Disclosure / Phase 4

Probabilistic architecture + deterministic compliance requirement = deferred liability, not managed risk.

Warning · Disclosure / Phase 4

Prompt quality is 30% of the problem. Task design — which tasks to bring and how to structure them — is the other 70%.

Meta · Disclosure / Phase 5

Stop re-reading these documents. Start redesigning which tasks you bring and how you structure them. The documents are the visible output; the updated mental model is the actual product.

Meta · Disclosure / Phase 5

Prompt Blocks

Ready-to-Use Prompts

The interaction design changes identified across all three documents, translated into prompts you can copy and use immediately. Each one changes what Manus optimizes for.

01 · The Next Question

From Disclosure Phase 5: the single question that translates the epistemological work of this three-document sequence into operational change. It hasn't been asked yet.

"Given everything in the three documents I've been building with you — the Playbook, the Contrarian Inversion, and the Disclosure — what is the single interaction design change that would most improve the quality of my outputs from you, and what would it look like in practice? Be specific. Not a principle — a concrete change to how I structure tasks, frame requests, or sequence sessions."

02 · The Failure Analysis Request

From Disclosure Phase 1: ask for the failure map before the task, not after. This is the single highest-leverage change most operators never make.

"Before we start this task, I want your honest failure analysis. What are the most likely ways this goes wrong? Which assumptions in my framing are load-bearing and potentially wrong? Which steps are brittle? Where is the problem underspecified in ways that will cost us later? Give me the map first, then we'll proceed."

03 · The Reframe Request

From Disclosure Phase 2: invoke the most valuable and least-used capability — replacing the framework entirely rather than executing within it.

"Before you answer my question, I want you to do something first: challenge the framing. Is the way I've posed this problem the right way to pose it? Is there a reframe that would dissolve the difficulty rather than solve it within the current frame? Give me the reframe first. If the original framing is correct, tell me why and proceed. If it isn't, give me the better frame and let me decide whether to use it."

04 · The Honest Assessment Request

From Disclosure Phase 3: explicitly grant permission to disagree, which changes what Manus optimizes for from palatability to accuracy.

"I want your honest assessment of [this work / this plan / this argument]. Push back where you think I'm wrong. Tell me if the framing is off. Don't optimize for an answer I'll find satisfying — optimize for accuracy. If you're uncertain, say so and continue. If you're confident I'm wrong about something, say that directly."

MANUS AI — START HERE — MAY 2026
The documents are the visible output. The updated mental model is the actual product.