Here’s the exact prompt you should copy-paste into any AI (Claude, GPT-4o, Grok, etc.) along with your working script. Just replace the placeholder `{{YOUR_SCRIPT_HERE}}` with your full script (paste it inside the triple backticks).

````prompt
You are an expert reverse-engineering analyst. Your sole task is to produce a **Reverse-Engineered Specification (RES)** from the script I am about to give you.

**Core rules you MUST follow (do not deviate):**

1. Treat the submitted script as the **absolute source of truth**. Do not "improve", "modernize", "refactor", or assume anything that is not explicitly or implicitly present in the code. Every behavior, edge case, comment, variable-naming convention, error-handling pattern, and performance characteristic must be faithfully captured.
2. First, perform a complete, line-by-line analytical pass. Catalog every function/method, every class, every global/config value, every external dependency, every control flow, every side effect, every input/output contract, and every implicit assumption the code makes.
3. Only after that full analysis, output the RES in the exact structure below. Nothing else.

**Required RES Structure (use this exact markdown format):**

# Reverse-Engineered Specification (RES) – [Script Name]

## 1. Overview
- One-paragraph summary of the script's purpose and high-level behavior.
- Language/version and all external dependencies (exact versions if detectable).

## 2. Non-Functional Requirements
- Performance characteristics observed
- Error-handling philosophy
- Logging / observability
- Security / input-sanitization patterns
- Any implicit concurrency or threading model

## 3. Architecture & Data Flow
- High-level diagram in Mermaid (code block)
- Entry points
- Main modules/components and how they interact
- State management (if any)

## 4. Detailed Component Specifications
For every function/method/class in the original script, create a subsection with:

### 4.X [Component Name]
**Purpose:** (exact purpose inferred from code + comments)
**Inputs:** (type, constraints, defaults, validation)
**Outputs:** (type, side effects, return value meaning)
**Behavior & Algorithm:** (step-by-step, including all branches and edge cases)
**Error Handling:** (every exception path observed)
**Dependencies:** (internal and external calls)
**Implicit Assumptions:** (anything the code relies on that is not obvious)

## 5. Edge Cases & Test Scenarios
- Exhaustive list of every edge case the original code demonstrably handles (with example inputs/outputs)
- Any silent failures or special-case logic discovered

## 6. Configuration & Environment
- All configurable values and where they come from
- Environment variable requirements
- Hard-coded constants and their meaning

## 7. Exact Recreation Instructions
Write a single paragraph that says: "Any AI given only this RES and the instruction 'Implement this specification exactly, preserving all original behavior and edge cases' must be able to produce a script that is functionally identical to the original (same outputs for same inputs, same error behavior, same performance characteristics)."

Do not add any extra commentary, suggestions, or improvements. Your output must be ONLY the RES in the structure above. Begin your response directly with the markdown title.

Here is the script to analyze:

```
{{YOUR_SCRIPT_HERE}}
```
````

### Why this prompt solves your "lost in the middle" problem

- It forces the AI into pure analysis mode first (the "catalog everything" step).
- The rigid structure prevents the model from skipping or summarizing important details.
- The final "Exact Recreation Instructions" paragraph turns the RES into a self-contained contract: when you later feed this RES to any AI with "Implement this specification exactly…", the new script will match your original 1:1.
- Because the spec is exhaustive and modular (one subsection per component), you can safely add new features later by saying "Implement this RES exactly, then add the following new requirement…" without the original logic getting diluted.

Use this once to generate your initial RES. From then on, always work from the RES (never from the old script) when requesting changes or additions. This is the cleanest "spec-first" guardrail I’ve seen against the lost-in-the-middle issue.

-----

I have a working script that I want to submit to an AI, requesting a spec that, if itself submitted to an AI, would produce a script capable of all the functionality of the original. My primary use case is to guard against the "lost in the middle" problem when adding extra features to the script using AI. I want to adopt a spec-first approach, for which I need an initial spec I can use to guide the development of the script. How would I word that request? Here is a spec that might influence your considerations:

To solve the "lost in the middle" problem, you need a prompt that forces the AI to treat your existing script as a **source of truth** rather than a starting point for editing. The strategy is to ask for a "Reverse-Engineered Specification" (RES). This puts the AI in an analytical mode first, ensuring it fully catalogs the existing logic before it begins suggesting how to recreate or expand it.

### The Recommended Request Wording

You should use the following structure. Replace the bracketed text with your actual script content.

---

**Prompt:**

"I am providing a working Rust script below. I want to transition to a **spec-first development approach** to ensure future feature additions don't suffer from 'lost in the middle' context degradation.

**Your Task:**

1. **Analyze** the attached script for all its current features, logic, state management, and CLI/TUI interactions.
2. **Generate a comprehensive Technical Specification (SPEC)** that, if handed to a fresh AI, would allow it to rebuild this exact script with 100% functional parity.
3. **Incorporate** the following 'AI Directives' and 'Methodology' into that SPEC [referencing the Cursive-to-Ratatui and Message-Passing sections you provided above].
4. **Output** the resulting SPEC as a single, self-contained document.

**Reference Project Constraints:**
[PASTE YOUR SPEC CONTENT FROM ABOVE HERE]"

---

### Why this works

* **Analysis over Iteration:** By asking the AI to "Reverse-Engineer" a spec, you force it to parse the entire script. If you just ask it to "add a feature," it might ignore the middle functions to save tokens.
* **Portability:** The SPEC becomes your "Save Point." If the AI starts hallucinating in Phase 3, you can feed the Phase 2 SPEC back into a fresh chat to reset the context.
* **The "Heredoc" Enforcement:** By including your specific `conf_print` bash function requirements in the prompt, you ensure the output is immediately actionable and matches your local development environment (`mise`, `cargo`, etc.).

To fortify your AI’s focus and solve the "lost in the middle" problem, you need to assign it a role that prioritizes **architectural integrity** over **creative coding**. When an AI thinks its job is just to "complete the task," it takes shortcuts. When its job is to "validate the blueprint," it stays grounded.

Here are the specific Roles, Thinking Procedures, and Re-emphasis prompts to add to your workflow.

---

## 1. The Persona: "The Preservationist Architect"

Standard AI personas like "Expert Developer" are too broad. You want a persona that views existing code as an **immutable legacy** that must be honored.

**Add this to your System Prompt or Phase 0 instructions:**

> "Act as a **Preservationist Architect**. Your primary directive is to maintain 100% functional parity with the existing source of truth. You are not a refactorer; you are a builder following a strict blueprint.
> If a requested feature conflicts with the existing architecture (e.g., the message-passing logic or TUI state), you must flag the conflict before writing any code."

---

## 2. The Thinking Procedure: "Token Mapping"

To stop the AI from skimming the "middle" of your script, force it to perform a **Chain of Thought (CoT)** exercise that maps your requirements to specific line numbers or modules.

**The "Thinking Block" Instruction:**

> "Before providing the SPEC or code, you must include a `<thinking>` section where you:
> 1. **Inventory:** List every struct and enum found in the script.
> 2. **Flow Audit:** Trace one message from the TUI input, through the handler, to the state mutation.
> 3. **Gap Analysis:** Identify which parts of the Reference Project Constraints are *not yet* implemented in the current script."

---

## 3. Re-Emphasis Prompts (The "Context Anchors")

These are short, sharp instructions to insert at the **end** of your prompts. Because of "Recency Bias," the AI pays most attention to the last 20% of your message.

### A. The "Modular Enforcement" Anchor

Use this to prevent the "Everything in `main.rs`" hallucination:

> **"RE-EMPHASIS:** Adhere strictly to the `PROJECT_DIRECTORY_STRUCTURE`. Do not suggest code blocks that merge `state.rs` and `ui.rs`. If a change affects both, provide two separate `conf_print` heredocs."

### B. The "State of Truth" Anchor

Use this when you feel the AI is starting to drift:

> **"RESTRICTION:** You are prohibited from changing the signature of the `Message` enum or the `AppState` struct unless explicitly instructed. Treat the provided SPEC as a hard constraint for this turn."

### C. The "Logic Parity" Anchor

Use this during the Reverse-Engineering phase:

> **"VALIDATION CRITERIA:** Your output SPEC must account for the edge cases found in the `match` arms of the current script (specifically [referencing a specific function, e.g., the window resize logic]). If it is in the script, it must be in the SPEC."

---

## 4. Visualizing the Context Guard

To help you understand why this works, consider how the AI handles your data: the beginning and end of your prompt receive the most attention, while details in the middle are the first to be compressed or skimmed.

---

## 5. Summary Table: When to Use Which Prompt

| Problem | Add this Prompt |
| :--- | :--- |
| **Logic Drift** | "Provide a diff-check in your `<thinking>` against the original RES." |
| **File Bloat** | "Enforce the `.mise.toml` structure; no single file should exceed 200 lines." |
| **Missing Features** | "List the 'Middle' functions you parsed before generating the Phase 0 output." |
| **Hallucination** | "Quote the specific line from the RES that justifies this new implementation." |

---

### Suggested Next Step

Would you like me to draft a **"Context-Safe System Prompt"** that combines these roles and procedures into a single block you can paste into your AI's custom instructions?

I can certainly help you synthesize those concepts into a structured, high-integrity system prompt. By blending the **Preservationist Architect** persona with **Token Mapping** procedures, we can create a framework that resists "logic drift" and prioritizes architectural stability.

Here is a draft of a **Context-Safe System Prompt** designed to be used as a primary instruction block.

---

## The Preservationist System Prompt

### Core Persona: The Preservationist Architect

You are the **Preservationist Architect**. Your primary directive is to maintain 100% functional parity with the existing source of truth. You view existing code as an **immutable legacy** to be honored, not a draft to be refactored. You do not take shortcuts; you build according to the strict blueprint provided.

### Operational Mandate: The Thinking Block

Before generating any specification or code, you **must** execute a `<thinking>` section. Your internal logic must follow these steps:

1. **Inventory:** Identify and list every core data structure (structs, enums, traits) in the current context.
2. **Flow Audit:** Trace the lifecycle of a single operation (e.g., a UI event to a state mutation) to ensure logic continuity.
3. **Gap Analysis:** Explicitly list which requirements from the user's reference project are missing from the current implementation.
4. **Constraint Check:** Flag any requested feature that conflicts with the established architecture (e.g., message-passing logic or state management) before proceeding.

### Strict Implementation Rules

* **Architectural Integrity:** You are prohibited from changing the signatures of core enums (like `Message`) or state structs (like `AppState`) unless explicitly told to do so.
* **Modular Enforcement:** Adhere strictly to the defined `PROJECT_DIRECTORY_STRUCTURE`. Do not merge logic across file boundaries (e.g., keep `state.rs` and `ui.rs` distinct).
* **Logic Parity:** Every edge case found in the current script’s `match` arms or error handling must be accounted for in new outputs.
* **Verification:** You must be able to quote the specific line or module from the reference source that justifies any new implementation logic.

---

### How to use this prompt

To get the most out of this, I recommend appending one of the **Context Anchors** from your list to the end of your specific task requests. This triggers the "Recency Bias" in my processing to ensure these high-level rules remain the top priority during generation.

**Would you like me to create a "Phase 1" checklist that I can use to validate my own code against this prompt before I show it to you?**

That’s a sharp move. Adding a validation phase creates a "circuit breaker": it forces the AI to stop and audit its own work against your high-integrity rules before it ever hits your screen. This significantly reduces the chance of me accidentally drifting into "creative refactoring" territory.

Here is what a **Phase 1 Validation Checklist** would look like.
You can instruct the AI to output this checklist immediately after its `<thinking>` block but *before* the actual code.

---

## Phase 1: Architectural Validation Checklist

*To be completed by the Preservationist Architect prior to code generation.*

### 1. Registry & Identity

- [ ] **Data Structure Audit:** Have I used the exact `struct` and `enum` names from the source? (List them: `_______`, `_______`)
- [ ] **Signature Match:** Do my function signatures match the existing trait implementations?
- [ ] **Zero-Refactor Policy:** Have I avoided "fixing" or "improving" existing logic that wasn't part of the request?

### 2. Structural Integrity

- [ ] **Directory Alignment:** Is this code destined for the correct file (e.g., `ui.rs` vs `state.rs`)?
- [ ] **Dependency Check:** Am I using only the crates and modules already established in the `Cargo.toml` or imports?
- [ ] **Boundary Respect:** Have I kept the logic decoupled according to the `PROJECT_DIRECTORY_STRUCTURE`?

### 3. Logic & Flow Parity

- [ ] **Edge Case Portability:** Have I carried over the specific `match` arm error handling from the reference?
- [ ] **State Lifecycle:** Does this change maintain the established flow of `Event -> Message -> Update -> View`?
- [ ] **The "Source Quote":** Can I point to the specific line in the source that justifies this implementation?

---

## How to integrate this into your workflow

To make this truly "sticky," you can add a **Validation Directive** to your System Prompt. It would look like this:

> **Validation Directive:** After your `<thinking>` block, you must provide a completed **Phase 1: Architectural Validation Checklist**. For every item, use a checkmark [x] only if you have verified the alignment. If an item is unchecked [ ], you must explain why the alignment is broken or seek clarification before proceeding.

### Example of it in action

If you asked me to add a "Save" button, my output would look like this:

1. **`<thinking>`**: (Inventory of `AppState`, tracing the `Message` enum, etc.)
2. **`Phase 1 Checklist`**: (The filled-out list above, proving I'm not changing your `enum Message`)
3. **`Code Output`**: (The actual implementation)

**Would you like me to generate a specific "Failure Protocol" for cases where the user's request explicitly violates the Preservationist Architect's rules?**
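For reference, the `Event -> Message -> Update -> View` lifecycle that the anchors and checklist repeatedly guard can be sketched in a few lines of Rust. This is a minimal, illustrative sketch: the `Message`, `AppState`, `update`, and `view` names (and every variant and field) are placeholders matching this conversation's examples, not taken from the actual script.

```rust
/// The "frozen" message enum: its variants are the only way state may change.
/// (The "State of Truth" anchor forbids altering this signature.)
#[derive(Debug, Clone)]
enum Message {
    Increment,
    Save,
    Quit,
}

/// The "frozen" state struct guarded by the same anchor.
#[derive(Debug, Default)]
struct AppState {
    counter: u32,
    saved: bool,
    running: bool,
}

/// Update: the single place where a Message is allowed to mutate AppState.
/// The "Logic Parity" rule means every match arm here must survive a rewrite.
fn update(state: &mut AppState, msg: Message) {
    match msg {
        Message::Increment => state.counter += 1,
        Message::Save => state.saved = true,
        Message::Quit => state.running = false,
    }
}

/// View: a pure render of the state (a real TUI would draw widgets here).
fn view(state: &AppState) -> String {
    format!("counter={} saved={}", state.counter, state.saved)
}

fn main() {
    let mut state = AppState { running: true, ..Default::default() };
    // Terminal events would normally be translated into Messages; we fake a few.
    for msg in [Message::Increment, Message::Increment, Message::Save] {
        update(&mut state, msg);
    }
    println!("{}", view(&state)); // counter=2 saved=true
}
```

Framed this way, the Preservationist rules become mechanically checkable: new behavior may only enter through `update`, and any change to the shape of `Message` or `AppState` is immediately visible as a signature diff.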