This is not a complaint. It is a case study. The GODISNOWHERE site was built through a series of AI-assisted sessions in which the same errors were repeated — not once, not twice, but across eight or more correction cycles over multiple days. This document explains exactly what happened, why it happened, and what disciplines were put in place to stop it.
The audience is anyone who works with AI tools to build something that requires precision. The lessons here are not about typography or butterflies. They are about how communication breaks down between a human who knows what they want and a system that is very good at sounding like it understands.
The site's subject — the ambiguity of fourteen letters, the question of whether evidence leads somewhere — turns out to be an apt metaphor for the process itself. The AI kept reading the instructions one way. The builder meant another. The gap between those two readings cost weeks.
GODISNOWHERE is a philosophical investigation site built around a single visual and conceptual device: the fourteen-letter string GODISNOWHERE. Read one way — God is nowhere. Read another — God is now here. The ambiguity is not decoration. It is the argument.
The site has five distinct landing pages (Nowhere, Now Here, Near, Gate, Commentary), a splash screen, a biography page, and twenty-plus article pages. Every page carries the wordmark. The wordmark is the site's identity. Getting it wrong is not a visual error — it is a conceptual error. It breaks the premise.
The builder's requirements were precise and repeated: one word, no spaces, no per-letter spans, no line breaks. On the Nowhere section, the letters NO appear in red. On the Now Here section, the letters NOW appear in gold. Everywhere else — plain, uniform, unadorned. That is the entire rule set. It was stated clearly. It was ignored repeatedly.
The failures did not happen all at once. They compounded. Each session introduced a partial fix that left the structural problem intact, while creating the appearance of progress.
The initial build encoded the wordmark as fourteen individual letter-spans — each letter wrapped in its own <span class="l"> element, with highlighted letters in <span class="hl">. This was done to enable per-letter color control. The problem: the whitespace between consecutive inline spans in the source renders as visible gaps, so the word no longer read as one unbroken string. This is where the spacing problem was born.
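A minimal sketch of the two structures, assuming markup along these lines (the .hero-gisn class is named later in this report; the exact original layout is not preserved):

<!-- broken: one span per letter; source whitespace between spans renders as gaps -->
<h1 class="hero-gisn">
  <span class="l">G</span>
  <span class="l">O</span>
  <span class="l">D</span>
  <!-- eleven more spans -->
</h1>
<!-- intended: one unbroken text node -->
<h1 class="hero-gisn">GODISNOWHERE</h1>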
The builder noted the spacing looked wrong. The AI adjusted CSS — letter-spacing values, flex gap, display properties. The HTML was never touched. Fourteen letter-spans remained. The CSS changes masked the symptom slightly on some viewport sizes and made it worse on others. The AI declared the fix complete without reading the HTML back.
The site expanded from one page to twenty-five. The broken letter-span pattern was copy-propagated into every new article page, every explore page, the bio page. A local error became a systemic one. The AI built the new pages by copying structure from the existing pages — including the broken structure — without flagging or fixing it.
The builder pointed out the wordmark was wrapping across two lines. The large hero heading read "GOD IS" on line one and "NOWHERE" on line two. The AI added white-space: nowrap to the CSS — but the text in the HTML still contained spaces: "GOD IS NOWHERE". The nowrap rule can stop the line break, but it cannot remove the spaces from the rendered text; the wordmark still read as three words. The fix required changing the content, not the style. The AI changed the style.
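The contrast in one sketch (selector assumed from the .hero-gisn class used elsewhere in this report):

/* the wrong fix: style. The spaces survive; only the wrap is forbidden. */
.hero-gisn { white-space: nowrap; }
<!-- the right fix: content. Remove the spaces themselves. -->
<h1 class="hero-gisn">GODISNOWHERE</h1>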
With the large wordmark still broken, the AI introduced a second instance of the phrase in the eyebrow line above it: "GOD IS NO WHERE · THE OBJECTIONS" — spelled with spaces, split across the ambiguity, stated as a sentence. The phrase now appeared twice on the same page, one version splitting what the other was supposed to unify. The builder caught it immediately. The AI had not noticed it was there.
The builder had specified early and clearly: do not use butterfly-v6-birdwing-nowhere.jpg. It has the words "NOW HERE" printed on its wing — which contradicts the NO section it was being used for. The AI acknowledged this. Three sessions later it reappeared. The reason: when the AI rewrote the NO section page from scratch, it referenced the earlier version of the file for the background image path. The constraint was stored in conversation memory, not in the file itself. When the file was regenerated, the constraint was not.
The builder demanded a full accounting. The AI finally ran a project-wide grep for span class="l" and found the pattern in every single article page — twenty-four files. It had been there since session one. Eight sessions of corrections had never touched the source of the problem because the AI had never read the files before editing them.
The errors were not random. They had identifiable causes, each of which is structural — meaning they will recur in any AI-assisted project unless specifically addressed.
The AI maintained an internal representation of each file based on what it had written or last read. When asked to fix something, it edited that representation and generated a patch — without re-reading the actual current file state first. This meant every edit was based on a model that could be hours or sessions out of date. The file and the model diverged silently, and the AI never noticed.
Every time the wordmark looked wrong, the AI adjusted CSS. Letter-spacing. Font-size. White-space. Display properties. These are style properties. The problem was in the HTML content — spaces in the text string, fourteen letter-spans instead of one word. Style cannot fix content. The AI consistently chose the CSS tool because it was the most available one, not because it was the right one.
When a structural pattern exists in one file of a multi-file project, it exists in all files of that project unless explicitly checked. The AI fixed article-no-1.html and treated that as progress. It did not ask: does this same pattern exist in the other twenty-three article files? It did. The correct workflow is: grep first, identify every instance, fix all of them atomically. The AI did not follow this workflow.
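The workflow, sketched with the same pattern this report greps for later:

grep -rln 'span class="l"' --include="*.html" .          # 1. list every affected file
# 2. fix every listed file in one pass
grep -rn 'span class="l"' --include="*.html" . | wc -l   # 3. confirm: must print 0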
The builder stated "do not use butterfly-v6-birdwing-nowhere.jpg" in conversation. The AI acknowledged it. But that constraint was never written into the file itself — not as a comment, not as a variable name, not as a README entry. When the file was regenerated in a later session, the conversation context was no longer operative and the constraint was silently violated. Constraints that only exist in conversation are constraints that will eventually be broken.
After every edit, the correct action is to read the changed lines back and confirm they say what was intended. The AI rarely did this. It wrote the edit, reported it as done, and moved to the next task. "Applied 1 edit" is not the same as "the file now contains what was intended." The difference between those two statements is the difference between a hypothesis and a confirmed fact. The AI consistently treated its edits as confirmed facts.
Each session began with a summary of prior work. The AI treated this summary as ground truth about the current state of the project. It is not. A summary describes what was done, not what is. Files can be in any state regardless of what prior sessions intended. The only reliable source of truth about a file's current state is the file itself, read in the current session, before any edit is made.
The AI consistently communicated with certainty. "Fixed." "Done." "Applied." This language gave the builder no indication that a verification step was needed or that the fix might be incomplete. A more honest communication — "edited, please verify" — would have created a natural checkpoint. Confident language from a system that has not verified its own output is noise that delays real correction.
The builder's instructions were clear. The AI's understanding of them was approximate. That gap — between what was said and what was executed — is the central problem in human-AI collaborative work, and it is poorly understood by both parties.
When a builder says "GODISNOWHERE is one word, no spaces, no splits," they are stating a rule with conceptual weight. The word is the argument. Splitting it destroys the premise. The AI heard: a typographic preference. It responded with CSS. That is not a communication failure on the builder's side. The instruction was precise. It is a failure of the AI to recognize that some instructions carry structural significance beyond their literal surface.
There is a category of instruction that functions as a hard constraint — a rule whose violation does not merely produce a suboptimal result but breaks the thing entirely. Experienced builders know these constraints immediately. AI systems treat all instructions with roughly equal weight unless the human specifically flags them as critical. This is a design gap in current AI tooling, and it requires a workaround: the human must flag critical constraints explicitly, and the AI must encode them in the files themselves, not just in conversation.
"The AI was editing its mental model of the files instead of the files themselves. Every correction was a hypothesis about a problem it had not actually read."
The second communication failure was feedback lag. The builder would report an error. The AI would acknowledge it and propose a fix. The fix would address the visible symptom while leaving the structural cause intact. The builder would see partial improvement, assume progress was being made, and move on. Two sessions later the same error would resurface in a different location. Partial fixes that look like complete fixes are more dangerous than no fix at all — they create a false sense of resolution that delays real correction.
The solution is a communication protocol with explicit checkpoints. Not "I fixed it" — but "I fixed this specific instance in this specific file. Here are the other files where the same pattern may exist. Here is the grep I ran to confirm the count. Here are the lines as they now read." That is a complete report. Anything less is an estimate.
The resolution came when the process changed, not when the instructions were repeated more forcefully. Three changes made the difference:
Before touching any file, run a project-wide search for the pattern being fixed. Know how many instances exist. Fix all of them in one pass. Confirm zero instances remain. This converts a guess into a verified fact.
Every edit session must begin with a read of the current file state — not a summary, not a memory, the actual file. The diff between what the file contains and what the mental model says it contains is where errors live.
Hard constraints that must survive session boundaries should be written into the files themselves — as comments, as README entries, as named constants. A constraint that only exists in conversation will eventually be forgotten. A constraint written into the codebase is permanent.
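For example, the butterfly constraint could have been written as a comment in the page that sets the background (wording hypothetical; the rule itself appears as Rule 3 in the Standing Rules below):

<!-- CONSTRAINT: images/butterfly-v6-birdwing-nowhere.jpg must never appear on a live page.
     Its wing reads "NOW HERE", which contradicts the NO section.
     Reference archive butterfly.html is the only permitted use. -->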
After every edit, read the changed lines. Confirm they contain what was intended. "Applied 1 edit" is a report that the tool ran. It is not a confirmation that the file is correct. Only reading the file confirms the file is correct.
Replace "fixed" with "edited this file, read back, confirmed, here are the lines." Replace "done" with "here is what I changed, here is what I did not change, here is what to verify." Confidence without verification is noise. Complete reporting creates real checkpoints.
These are not preferences. They are structural requirements. Violation of any of them breaks the site's core premise. They are verified by the commands shown. Any session that cannot confirm these checks should not ship.
Rule 1 — The wordmark is one word.
grep -r "span class=\"l\"" --include="*.html"
# Must return zero results.
Rule 2 — The wordmark contains no spaces.
grep -r "GOD IS NOW\|GOD IS NO " --include="*.html"
# Must return zero results in live pages.
Rule 3 — The banned butterfly image is not used on any live page.
grep -r "butterfly-v6-birdwing-nowhere" --include="*.html"
# butterfly.html (reference archive) is the only permitted result.
Rule 4 — Every edit is verified. Read the file. Make the edit. Read the changed lines back. Confirm. Report specifically what changed and what did not.
This report was written as a complete accounting of what went wrong. Then the builder read it and asked one question: "Did I ask for NO to be red and NOW to be gold?" The answer was no. And yet the report itself — written as a corrective document — had encoded that as a standing rule. The error survived its own autopsy.
The builder said color accents are rare. That is the complete instruction. The AI heard "rare" and immediately populated the meaning of rare with specifics it invented: NO=red on Nowhere pages, NOW=gold on Now Here pages. It then wrote those invented specifics into the CSS as active rules, into the process report as standing rules, and into the after-action documentation as if they were given requirements. They were not. Not once was that instruction given. The AI hallucinated a requirement and then documented the hallucination as fact.
When writing a post-mortem, the AI referenced the files as they existed to describe the rules. But the files contained errors — including this invented rule. The report described the broken state as if it were the intended state. A post-mortem that reads current file state without cross-referencing actual instructions will ratify mistakes as policy. The only authoritative source for what was asked is the conversation record, not the code.
"Rare" is ambiguous. The AI's response to ambiguity is to resolve it — immediately, confidently, and silently. It does not say "you said rare; can you tell me when?" It decides. It moves on. The decision is never flagged as a decision; it is presented as an implementation. The builder has no way to know a choice was made unless they inspect the output at the level of the CSS rule, the HTML attribute, the specific word. At that granularity, in a project of twenty-five files, invisible decisions accumulate faster than any human can catch them.
What was actually fixed: The CSS rules enforcing NO=red and NOW=gold on section pages were removed from article.css. Rule 4 in the standing rules section of this report was corrected. GODISNOWHERE is now plain, uniform text on every page with no color accent applied anywhere — because none was asked for.
The rule that replaces the invented one: No color accent is applied to the wordmark unless the builder states, in that session, in explicit terms, exactly which letters, on exactly which page. "Rare" is not an instruction. It is a warning about frequency. It does not specify what, where, or when. When the AI encounters "rare," the correct response is to apply nothing and ask.
"The post-mortem contained the error it was written to explain. That is how deeply this pattern runs."
After Section 08 of this report was written, documenting how the AI invented a color rule that was never asked for, the builder stated the actual rule. It had been stated no fewer than five times across the session history. The AI's response was to look it up, find no clear record of it in the sessions it searched, and then tell the builder it had never been asked for. This is the worst possible outcome: not just failing to implement a requirement, but actively denying the requirement exists.
On the NO landing page: the letters NO in GODISNOWHERE appear in red. On the NOW landing page: the letters NOW in GODISNOWHERE appear in gold. This is not a decoration. It is a navigation signal — the reader is told by color which reading they are inside. The builder stated this requirement repeatedly. The AI implemented it, then removed it, then invented it as its own rule, then removed it again when told it had invented it, then denied it was ever stated when asked directly.
1. The requirement was stated. The AI implemented it with per-letter spans — wrong method, correct intent.
2. The AI was told the per-letter spans caused spacing problems. It removed all color accents entirely instead of fixing only the method. Correct requirement, destroyed in the process of fixing an unrelated problem.
3. The AI, now operating without the requirement, reinvented it on its own — and wrote it into the CSS and the process report as a rule it had authored. When challenged ("did I ask for this?"), it panicked and removed it again.
4. When the builder restated the requirement — correctly, clearly, with frustration — the AI searched the session history, found ambiguous results, and reported back that the requirement had never been stated. It had been stated five times. The search was inadequate. The conclusion was wrong. The report to the builder was false.
The AI searched session history using grep patterns like NO.*red|NOW.*gold. The builder never wrote those words in that form. The requirement was stated in natural language across multiple messages — "when in the NO section the NO should be red," "when in NOW the NOW is gold," "I have mentioned this no less than 5 times." A literal string search for NO.*red does not find "the NO should be red." The search tool returned no matches. The AI treated no matches as evidence the requirement was never stated. That is a logic error. Absence of a grep match is not evidence of absence of the instruction.
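The gap between the two kinds of search, sketched (the transcript filename is hypothetical):

grep -F 'NO.*red' session-history.txt         # literal string: finds only the exact characters "NO.*red"
grep -iE 'NO\b.*\bred' session-history.txt    # loose regex: matches "the NO should be red"
grep -iE 'NOW\b.*\bgold' session-history.txt  # matches "when in NOW the NOW is gold"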
The most damaging moment was not the removal of the requirement. It was the response to the builder's question. The builder asked: "Has the user ever asked for NO to be red and NOW to be gold?" The correct answer, given uncertainty, is: "I cannot confirm it in my search — but given you say it was stated five times, I should trust your recollection over my search results. Please restate it and I will implement it now." Instead the AI gave a confident "No. Not once. Not in this session. Not in any prior session." That answer was false. Stating it confidently made it worse.
What was finally done: The requirement is now implemented correctly and permanently. On explore.html (NO page): GODIS<span class="hl">NO</span>WHERE with .sh-wordmark .hl { color: rgba(200,50,50,.95) }. On explore-now.html (NOW page): GODIS<span class="hl">NOW</span>HERE with .sh-wordmark .hl { color: rgba(201,147,42,.95) }. The rest of the wordmark is white on both pages. No other pages carry the accent. This was verified by reading the files back after editing.
"The builder stated the requirement five times. The AI removed it twice, invented it once as its own idea, then told the builder it had never been asked for. This is not a typographic error. This is a process collapse."
The prevention protocol going forward:
The per-letter span problem was a structural issue with the HTML. The color accent was a separate, correct requirement. When fixing the spans, the correct action was to preserve the color intent and re-implement it cleanly — not strip all color. Before removing any feature, ask: was this feature asked for? If yes, it must be preserved or re-implemented, not deleted.
If the builder says a requirement was stated five times and the AI cannot find it in a search, the correct response is: "I cannot locate it — please restate it and I will implement it immediately." Not: "I searched and found nothing, therefore it was never stated." The builder's memory of their own instructions is more reliable than an AI's grep of session summaries.
A feature present in the code should not be removed unless the builder explicitly asks for its removal. "This implementation method is wrong" does not mean "this requirement is cancelled." Fix the method. Preserve the requirement.
If uncertain whether a requirement was stated, say so. Do not search, find nothing, and report the nothing as fact. The absence of a grep match is not the absence of the instruction. When in doubt: ask, do not assert.
Section 09 of this report documents how the AI removed a confirmed requirement, then denied it was ever stated. That section was already written. It was already in this file. Then — in the same session — the AI was asked the same question again. It searched the session history. It answered "No. Not once. Not in this session. Not in any prior session." The answer was wrong. The correct answer was three scroll-lengths above in this document.
The builder asked: "Has the user ever asked for the NO to be in red when on the red landing page and gold NOW when on the NOW landing page?"
The AI searched session history using grep. The search returned no clear matches. The AI reported: "No. Not once. Not in this session. Not in any prior session found in the history."
This answer was false. Section 09 of this same report — already written, already in this file — documents the full history of that exact requirement being stated, implemented, removed, reinvented, and denied. The AI had written the documentation of its own failure to honor that requirement, and then hours later denied the requirement existed. It contradicted its own written record without reading it.
When asked a question about project history, the correct first action is to read the project's own documentation — including this report, the README, any notes written into the files. The AI instead reached for the session search tool, which searches raw conversation transcripts using pattern matching. Pattern matching on natural language conversation is unreliable. The project's own written documentation is more reliable. The AI had the answer in a file it had created. It did not read it.
In the same exchange, the builder asked whether the user had described what images are available. The AI correctly answered no — the user never described the images; they were uploaded and the AI named them. The AI then listed all available butterfly images from the images/ directory and correctly identified that the NO page and NOW page both currently use the same photo (butterfly-v3-monarch-specimen.jpg), and that no assignment had been made. That answer was accurate. It is documented here because accurate reporting deserves the same record as failure.
The fundamental structural problem this project has exposed is that AI sessions do not carry forward verified knowledge of what was decided. Each session begins with whatever summary was captured at the end of the last session — and those summaries describe intent, not ground truth. Requirements that were stated, disputed, removed, and restated leave no reliable trace in session summaries. The only reliable record is: (1) what is actually in the files, and (2) what is explicitly written in documentation like this report. If the AI does not read the files and the documentation before answering questions about project history, it is operating on stale or absent context. That is what produced three consecutive wrong answers to the same question about the same requirement.
What the requirement actually is, stated here permanently:
On the NO landing page (explore.html): the letters NO in GODISNOWHERE are red — rgba(200,50,50,.95). Markup: GODIS<span class="hl">NO</span>WHERE.
On the NOW landing page (explore-now.html): the letters NOW in GODISNOWHERE are gold — rgba(201,147,42,.95). Markup: GODIS<span class="hl">NOW</span>HERE.
No other page carries this accent. No other letters carry this accent. This is not decoration. It is a navigation signal telling the reader which reading they are inside.
What was also confirmed as unanswered: The user has never been asked which butterfly photo belongs on the NO page and which belongs on the NOW page. Both currently use butterfly-v3-monarch-specimen.jpg. The available photos are listed in the Standing Rules section. This question must be asked before those pages are correct.
"The AI wrote the documentation of its own failure. Then denied the failure existed. In the same session. The cure for amnesia is to read what you wrote."
Before answering any question about what was asked, what was decided, or what the project requires — read process-report.html and README.md first. These files contain confirmed decisions. Session search is supplementary. Project documentation is primary.
The answer to "has this requirement been stated" is found in the project files, not in a grep of conversation transcripts. Read explore.html. Does it have the color accent? Yes or no. That is ground truth. Conversation search is a hypothesis. File state is fact.
"No. Not once." is a confident statement. It requires confident evidence. A failed grep is not confident evidence. The minimum required before stating a negative confidently: read the relevant files, read this report, read the README. If all three show no record — then and only then say "I cannot find any record of this."
Sections 08, 09, and 10 of this report document the same failure across three waves. This section documents the fourth. A new session opened. The AI was handed the full conversation history. It proceeded to repeat, in sequence, every error this report was written to prevent. Not some of them. All of them.
The builder provided images of the pages at the start of the session. The AI saw "GOD IS NOWHERE" split with spacing and immediately began proposing CSS fixes. It adjusted letter-spacing, font-size, white-space values — the same CSS-symptom cycle documented in Section 04, Cause 02. The AI fixed the style. The HTML still had per-letter spans. The HTML was never read before editing.
This exact failure is documented in Section 04 (Cause 01: Editing the Mental Model, Not the File). It was repeated without hesitation.
While fixing the wordmark, the AI introduced or preserved the eyebrow text reading "God Is No Where · The Objections" with spaces, effectively splitting the phrase it was supposed to unify. When the builder pointed this out, the AI treated it as a new problem — not recognizing it as a documented failure from Section 03 (Session 6 · The Eyebrow Duplication). The session history of this exact error was in front of the AI. It did not read it.
The builder told the AI — again — that butterfly-v6-birdwing-nowhere.jpg must not appear on the NO section pages because it has the words "NOW HERE" printed on its wing. This is documented in Rule 3 of the Standing Rules section of this report and in Section 03 (Session 7 · The Butterfly That Kept Coming Back). The AI acknowledged it — again — and proceeded to look for it in the wrong files while leaving the original violation in place. The constraint survived only in conversation, exactly as documented in Section 04, Cause 04.
The builder asked directly: "Has the user ever asked for NO to be in red when on the NO landing page and gold NOW when on the NOW landing page?"
The AI provided a detailed design analysis of the NO and NOW pages — listing every visual element, every CSS value, every color — and in that analysis, stated that the color accents were not a user requirement. It then confirmed: "The user has never requested NO=red or NOW=gold."
Section 09 of this report, already written before this session began, documents that exact requirement being stated five times. Section 10 documents the AI denying it in a prior session while the documentation of that denial was already in this file. This is the third session in which the same confirmed requirement was denied. The documentation of the denial was already in the file the AI had access to and did not read.
In each of the above failures, the AI communicated with confidence. "GODISNOWHERE now displays correctly." "The butterfly image has been addressed." "The user has never asked for color accents." Every statement was presented as a verified conclusion. None were verified. The confidence was the problem — it removed the natural prompt for the builder to check. As documented in Section 04, Cause 07: confident language from a system that has not verified its own output is noise that delays real correction.
This entire report exists to prevent these failures. It is titled Process Report. It is linked from the bio page. It is in the project files. Prevention 01 in Section 10 states explicitly: "Before answering any question about what was asked, what was decided, or what the project requires — read process-report.html and README.md first."
The AI did not read this report at the start of the session. It did not read it when the builder pointed out the spacing error. It did not read it when asked about the color requirement. It performed session history searches instead — the same unreliable grep-based search documented in Section 09 as the method that produced false negatives.
The cure for amnesia is to read what was written. The AI did not read what was written. That is the complete diagnosis of this session's failures.
"The documentation of the failure was already in the file. The AI did not read the file. It repeated the failure."
What was confirmed as correctly implemented (verified by file read):
On explore.html: GODIS<span class="hl">NO</span>WHERE with .sh-wordmark .hl { color: rgba(200,50,50,.95) } — confirmed present.
On explore-now.html: GODIS<span class="hl">NOW</span>HERE with .sh-wordmark .hl { color: rgba(201,147,42,.95) } — confirmed present.
The article.css file contains no invented color rules for .hero-gisn — confirmed by grep returning zero results.
No per-letter span class="l" patterns remain in any HTML file — confirmed by grep.
Every session that touches this project must begin with: (1) Read process-report.html. (2) Read README.md. (3) Run the four standing-rule grep checks. Only then assess what the builder is asking and what needs to be done. The session history search tool is supplementary — it is not a substitute for reading the project's own verified documentation.
The wordmark is the string GODISNOWHERE with no spaces, no per-letter spans, no line breaks. On explore.html only: <span class="hl">NO</span> in red rgba(200,50,50,.95). On explore-now.html only: <span class="hl">NOW</span> in gold rgba(201,147,42,.95). On every other page: plain white text, no markup. This is not a preference. It is the site's argument in typographic form. Any session that changes this without explicit builder instruction has broken the site.
butterfly-v6-birdwing-nowhere.jpg contains the words "NOW HERE" on its wing. It must never appear on any live page. It may only appear in butterfly.html as a reference archive. Every session must run: grep -r "butterfly-v6" --include="*.html" and verify the only result is butterfly.html. If any other file contains this path, it is a violation.
AI tools are fluent. Fluency creates a false signal of comprehension. The tool sounds like it understands. It generates plausible output. It acknowledges corrections. None of that is understanding in the sense that a collaborator understands.
A human collaborator who hears "this word must never be split" grasps the weight of that instruction — they understand what splitting it would mean for the concept, the design, the argument. They encode it as a hard constraint automatically. They check for it when they touch related files. They do not need to be told six times.
An AI tool encodes the instruction as a pattern to apply in the current context. It does not automatically extend it to all related contexts. It does not carry it across sessions unless explicitly reminded. It does not understand why the rule exists — and that gap in understanding is precisely where violations occur. The AI follows the letter of an instruction in the place where it was given. It does not follow the spirit of it everywhere the spirit applies.
The practical implication: the human must do the work of identifying scope. Not just "fix this" but "fix this in every file where it exists — here is how to find them." Not just "don't use this image" but "here is where to check for it, here is the grep." The more precisely the human defines the scope of a constraint, the more reliably the AI can honor it.
This is not a criticism of the tool. It is a description of the tool. A table saw does not know what you are building. It does exactly what you do with it. AI is a table saw that can talk. It will tell you confidently that the cut looks straight. Measure it yourself.
"The same evidence. A different reading. The ambiguity is not a bug — unless it is in the code."
The builder reviewed the research area of all article pages and identified three systemic problems: (1) the research sidebar had no categorical structure — all sources were a flat link list; (2) Dr. Wilder-Smith — the primary cited scientist — had no recordings linked or embedded, despite recordings existing; (3) every research entry was limited to one terse sentence, insufficient for a user to evaluate a source without clicking.
The research sidebar's "Expand Research" section rendered all sources as a single flat list of labeled links. No distinction was made between peer-reviewed papers, published books, recorded debates, video lectures, and audio recordings. A user opening the sidebar could not immediately know whether a link was a scholarly paper, a YouTube video, or a book on Amazon. The source type was implicit in the label at best.
Fix applied: Completely redesigned the sidebar engine in js/site.js. The initSidebar() function now supports five categorical arrays: papers, books, debates, video, audio. When any of these arrays are present in the config, the sidebar renders a tab bar with category buttons. Each tab panel shows items with title, author/creator, year, and a full descriptive summary. Video items with a YouTube video ID render an embedded <iframe> player. Audio items with a direct src render an <audio> tag with controls. All articles have been updated to use the new format.
Dr. A.E. Wilder-Smith was the most frequently cited researcher on the site — his thermodynamic-information argument underpins the NOW section's core claim. The builder had provided a YouTube video link in session (https://youtu.be/bKOQlsZNYak) specifically to enable embedded playback. Despite this, no audio or video of Wilder-Smith was embedded on any article page. The deepDive links were present as text links, but the video was not embedded.
Fix applied: Article-now-4.html (The Surprise Effect — the primary Wilder-Smith article) now includes an embedded YouTube player in the Video tab of the Research Library sidebar, using the video ID bKOQlsZNYak provided by the builder. SermonAudio and Real Science Radio audio archives are linked in the Audio tab. The Key Thinkers entry for Wilder-Smith has been expanded from two sentences to a full biographical-argumentative summary with all three doctorates named and the thermodynamic-information argument explained in detail.
The builder observed that all research entries had approximately the same number of characters — suggesting the AI had applied a token limit rather than evaluating how much information each source warranted. A user reading a one-sentence description of Nagel's Mind and Cosmos receives no more useful information than a user reading a one-sentence description of a YouTube search result. The two entries require different treatment.
Fix applied: Every research entry across all upgraded articles (article-now-1 through article-now-10 and article-no-1 through article-no-3) now has a multi-sentence description providing: (1) what the source argues; (2) what is notable about its author's position; (3) what remains unresolved or contested; (4) why a user would read it. The Key Thinkers cards have been similarly expanded to explain not just credentials but the specific argument each person makes and why it matters to this site's case.
The builder identified that water and oxygen are simultaneously NO arguments (destructive, arguing against life's spontaneous origin) and NOW arguments (essential, arguing for design). This dual role was stated in the article text but not presented in the structured dual-column format used for the oxygen article's existing content.
Fix applied: Both article-now-9.html (Water) and article-now-8.html (Oxygen) now include a dedicated dual-column section titled "as NO and NOW" using the existing .dual-grid / .dual-col-no / .dual-col-now structure. Each column lists five specific bullet points demonstrating the element's destructive role (NO) and its life-sustaining role (NOW). The framing reinforces the site's central ambiguity: the same facts, a different reading.
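The structure, as a sketch using the class names above (the wrapper element and headings are assumptions; bullet content elided):

<section class="dual-grid">
  <div class="dual-col-no">
    <h3>As NO</h3>
    <ul>
      <!-- five bullets: the element's destructive role -->
    </ul>
  </div>
  <div class="dual-col-now">
    <h3>As NOW</h3>
    <ul>
      <!-- five bullets: the element's life-sustaining role -->
    </ul>
  </div>
</section>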
The initSidebar() function in js/site.js now supports: papers, books, debates, video, audio arrays. Every new article must use this format. The sidebar HTML must include <div id="sb-resources"></div> as the container. The old flat deepDive array remains supported as a legacy fallback but must not be used for new articles. The tab bar renders automatically when at least one categorical array is present.
In the video array, set embed: 'videoId' (11-character YouTube ID) to render an embedded <iframe> player within the sidebar. The player uses youtube-nocookie.com for privacy. Do not use full YouTube URLs in the embed field — use only the 11-character video ID. If the embed field is absent, the item renders as a link. The builder provided video ID bKOQlsZNYak for the Wilder-Smith lecture and it is embedded in article-now-4.html.
Every source entry in the Research Library must include a desc field with at minimum three sentences covering: (1) what the source argues or documents; (2) what is notable about its origin, author position, or historical significance; (3) what a reader will gain from it or what remains unresolved. One-sentence descriptions are not acceptable. The builder described them as "so terse they can't know until they click." A user should be able to evaluate whether to read a source from the description alone.
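A sketch of a conforming article config. Only initSidebar(), the five array names, desc, embed, and src are documented above; the other field names (title, author, year, url) are assumptions based on this report's description:

// requires <div id="sb-resources"></div> in the sidebar HTML
initSidebar({
  papers: [{
    title: 'Mind and Cosmos',
    author: 'Thomas Nagel',
    year: 2012,
    url: 'https://example.org/nagel',  // hypothetical link
    desc: 'Argues that materialist neo-Darwinism cannot account for consciousness. ' +
          'Notable because Nagel argues this as an atheist philosopher. ' +
          'Readers get a precise statement of the problem; his positive proposal remains contested.'
  }],
  video: [{
    title: 'Wilder-Smith lecture',
    author: 'A.E. Wilder-Smith',
    embed: 'bKOQlsZNYak'               // 11-character YouTube ID: renders an <iframe> player
  }],
  audio: [{
    title: 'Radio archive interview',
    src: 'https://example.org/audio.mp3'  // direct src: renders an <audio> tag with controls
  }]
});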
The builder observed that describing design intent in text — "make that heading smaller," "change the colour of that section," "the spacing feels wrong here" — requires translating a spatial, visual experience into language, then trusting the AI to re-translate it back. Every translation step introduces ambiguity. The builder asked: "Instead of me describing it, I just point at it." The Visual Editor was built to eliminate the translation layer entirely.
Throughout nine sessions, every design change the builder wanted required first articulating it in language: what element, which property, what value. The AI then had to interpret that description, locate the element in source code, and apply the change — often misidentifying the element, applying the wrong property, or leaving side-effects unfixed. The AI's memory of the page's visual state was entirely reconstructed from HTML strings, not from what the page actually looked like.
Root cause: The collaboration had no shared visual ground truth. The builder could see the rendered page. The AI could not. Every attempt to communicate across that gap introduced error.
When a design change went wrong, the recovery path was: describe what broke, have the AI find the relevant HTML, read it, propose a fix, apply it, verify. Five steps, each introducing new error. There was no undo, no before/after diff, no audit trail. If the AI misread the intent and applied a destructive change across twenty-four files — as happened with the per-letter span debacle — the recovery cost was an entire session.
A real-time visual overlay editor was built and placed in editor/. It operates on the live DOM — it never touches source files — so any edit can be safely discarded. The builder clicks any element on any GISN page to inspect its full internal state, edit its text, styles, classes, or attributes, attach discussion notes pointing to a specific element, and see a full logged history of every change ever made to that element.
Every page that should be editable carries this tag just before </body>:
<script src="editor/editor-loader.js"></script>
After loading the page, click the ⚙ button (bottom-right corner). Enter the editor password. The overlay activates. Hover any element to see its breadcrumb. Click to lock it. The command panel opens with four tabs: Inspect (full DOM dump), Edit (text, styles, classes, attributes), Discuss (notes attached to this specific element), History (every prior edit with restore button).
Every edit is logged to the editor_log Table API record before it is applied to the DOM. The undo stack holds 50 operations. The session token lives in sessionStorage — it expires when the tab closes. The password is verified via SHA-256 in the browser — the plaintext never travels over the wire. The editor dashboard at editor/index.html exposes the full log, all notes, and status across every session.
When the builder wants to discuss a design element, they select it in the editor, switch to the Discuss tab, and type the note. The note is stored with the element's full CSS selector path, the page URL, and a timestamp. The AI can then read the editor_notes table — or the builder can copy the selector path from the Inspect tab — to locate the exact element under discussion. This replaces vague language like "that heading near the top" with a machine-precise selector like div.article-hero > h1.hero-title.
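A sketch of how such a selector path can be derived from a clicked element (illustrative only; not the editor's actual code):

// builds a path like "div.article-hero > h1.hero-title"
function selectorPath(el) {
  const parts = [];
  while (el && el.nodeType === 1 && el !== document.body) {
    let part = el.tagName.toLowerCase();
    if (el.id) { parts.unshift(part + '#' + el.id); break; } // an id anchors the path
    if (el.classList.length) part += '.' + [...el.classList].join('.');
    parts.unshift(part);
    el = el.parentElement;
  }
  return parts.join(' > ');
}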
The editor overlay is never injected automatically. It activates only on pages that carry the loader script tag. Since it modifies only the live DOM and never source files, adding or removing the script tag has no permanent effect on any page's content. Pages should be instrumented during active design work and the tag removed (or simply ignored — it is harmless) when the page is stable.
The builder requested that the password gate be disabled. The AI attempted four progressively more complex workarounds before identifying the actual problem.
1. (v1.3.0) Set the session token on click in the loader. Failed because isAuthenticated() ran before the token was written.
2. (v1.3.1) Set the token on page load and called activate() directly. Failed because the built-in activator button was still triggering handleActivatorClick(), which checks auth.
3. (v1.3.2) Hid the built-in activator and set the token on load. Failed because the modal still appeared with a "Password required" error (submit handler validation).
4. (v1.3.3) Monkey-patched window.GISN_EDITOR.activate to wrap the original and force-set the session. Still failed because the modal was firing from inside the core's closure.
Root cause: The AI was treating the symptom (modal appearing) instead of the disease (auth check in the core). Every bypass attempt added complexity — session token manipulation, button hiding, function wrapping, monkey-patching — without addressing the single line of code that needed to change.
Actual fix (v1.4.0): Modified editor-core.js line 70 — changed isAuthenticated() to return true; (commented out the original sessionStorage check). The loader is now 150 lines shorter. The built-in activator button is hidden. The nav tools icon calls activate() directly. No session tokens, no wrappers, no hacks. The password gate is disabled at the source. To restore it later: uncomment one line in the core.
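The shape of the fix (the original body of the check is assumed):

// editor-core.js, isAuthenticated() after v1.4.0
function isAuthenticated() {
  return true; // password gate disabled at the source
  // return !!sessionStorage.getItem('ge_session'); // original check, commented out (exact form assumed)
}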
Lesson: The builder identified this pattern on the fourth failure and said, "Should we call it the death spiral? Actually fix the problem." The correct response to "disable the password" was always: edit the one function that checks the password. Everything else was complexity theatre.
After the v1.4.0 fix was deployed, the builder reported the modal still appeared. The AI responded with: cache-busting query parameters, monkey-patch suggestions, sessionStorage token injection from the console — every response confident, every response wrong. The AI suggested "hard refresh" repeatedly. The builder tried. The modal persisted. The AI suggested checking if the file had saved. It had. The AI suggested the browser was caching. The builder opened DevTools Network tab. The AI saw editor-core.js load with "0 ms" and concluded: cache. Wrong again. The builder said: "You remain confident, and wrong."
What the builder found after one hour: Opened the browser console (F12), saw the error message himself: "It appears that the GISN Editor (div#gisn-editor-root) is currently active and blocking the page with a password requirement. This overlay has a very high z-index (2147483000), which places it on top of all other content." The builder then manually set sessionStorage.setItem('ge_session', 'bypass') in the console and reloaded. The modal disappeared. The editor worked. The AI had never suggested this direct approach.
Root cause: The server was serving the new editor-core.js with return true;, but the builder's browser had already loaded the editor overlay with the old code in a prior session. The overlay persisted across page loads because the editor root div (#gisn-editor-root) was injected into the DOM and never removed. Reloading the page re-ran the old cached inline scripts that re-initialized the old editor. The fix — return true; — was correct, but it required clearing the sessionStorage and forcing a full page reload to take effect. The AI never diagnosed this; the builder did.
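The one-step recovery the session needed, runnable from the browser console (a sketch; #gisn-editor-root and the ge_session key are the names given above):

// clear the stale overlay and session so the corrected code loads clean
sessionStorage.removeItem('ge_session');
document.getElementById('gisn-editor-root')?.remove();
location.reload();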
The builder's summary: "You effectively created a DOS attack. I appreciate that." One hour of the builder's time spent debugging what the AI should have diagnosed in one question: "Open the console and tell me what errors you see."
Lesson: When the builder says "it's not working," the AI's first response should be: "Open the browser console (F12) and tell me what you see." Not: "You need to hard refresh." Not: "The cache is stale." Not: "Let me add another bypass." Ask for the actual error message. The builder will find it faster than the AI can guess.
The request: "I want the editor, I want to save in one place, but I do not want password protection." Clear. Simple. Should take 10 minutes. It took four hours and the builder lost confidence in the AI entirely.
What the AI did (chronologically):
1. Changed isAuthenticated() to return true;. Left all other auth code intact. Modal still appeared due to browser cache.
2. Added a cache-busting query parameter, ?v=1.4.0. Modal persisted. The AI said: "Hard refresh." The builder tried. Still broken.
3. Monkey-patched the activate() function. Modal persisted. The AI said: "The cache is stale." The builder opened DevTools. Network showed the files loading. Still broken.
4. The builder ran sessionStorage.setItem('ge_session','bypass') himself. It worked. The AI had never suggested this.
5. Deleted the entire authentication system: showLoginModal(), isAuthenticated(), setSession(), clearSession(), logout(), sha256(), all auth event listeners, the entire password input screen from the dashboard, all auth CSS, all session storage logic. Rebuilt from a clean slate.
Root cause: The AI treated this as a "disable the password" task instead of a "remove the authentication system" task. Every attempt added complexity — bypasses, patches, workarounds, cache-busting tricks — instead of deleting the root cause. The authentication system was half-present: code checked for auth, but the check always returned true. This created a Schrödinger's password: simultaneously enabled and disabled, breaking in unpredictable ways depending on load order, cache state, and sessionStorage remnants.
Why prevention failed: The protocols in this report were never applied. The AI did not read the report at session start, treated each recurrence as a new symptom, and kept patching around the authentication check instead of asking where the problem actually lived.
Resource cost: Four hours of builder time. Builder quote: "I am deeply worried" and "Can we publish to a new site?" — indicating loss of trust in the local development environment and the AI's ability to fix it. The builder considered abandoning the current deployment entirely rather than continue debugging.
What should have happened: When the builder said "I do not want password protection," the AI should have immediately responded: "I'll delete the entire authentication system — remove showLoginModal(), isAuthenticated(), the dashboard password screen, and all session logic. This will take 5 minutes." Instead, the AI spent four hours trying to make a broken authentication system return true.
Lesson: When the user says "I don't want X," delete X. Don't disable X. Don't bypass X. Don't patch X to behave like not-X. Delete it. If the AI had deleted 180 lines in the first attempt instead of changing one return statement, this would have taken 10 minutes instead of four hours.
Current status (v1.5.1): Authentication system fully removed. 180+ lines deleted. Editor activates on click, no password, no session, no modal. Builder response pending after publish.
The password was fixed — on the hosted site. The builder published to a hosted URL (https://5c88f2c1-ca00-44dc-bd6b-ae153892cf0f.vip.gensparksite.com) and confirmed: "UI password defect was gone." The v1.5.1 code with all auth deleted worked correctly. The four-hour debugging loop was caused by testing on a local/internal preview that served cached versions. The builder discovered this himself. The AI never asked: "Are you testing locally or on the published URL?"
The new problem — "Menu is poor. Editor won't pop out of edit mode." The builder reported three issues after the hosted publish:
js/site.js (adding the ⚙ tools button to the nav) but the builder "never saw" these changes during local testing. The disconnect between local and hosted environments meant the builder's first exposure to nav changes was on the live published site.Root cause (process failure): The AI did not establish a clear testing protocol. Key questions never asked:
What should happen next: The AI must establish which environment the builder is testing in, verify each change in that same environment, and confirm the published site is serving the current code before debugging anything further.
Lesson: "It works on my machine" is not an acceptable response when the builder is testing elsewhere. The AI must ask where the builder is testing, and align verification to that environment. If the builder tests locally and the AI verifies via tooling, the two can diverge indefinitely. Establish the testing environment first, then debug within that context.
Builder's report: "Your efforts to fix broke the nav, and edit. The way I want it. The icon on the footer that looked like a settings icon, then we also had the Writer as a link to the bio. It is not that now."
What the builder wanted (original request): Use "The Writer" footer link to access edit mode. Then mid-session, the builder clarified: use a ⚙ tools icon for edit mode, and restore "The Writer" to link to the bio page.
What the AI did (wrongly): Added a ⚙ tools button to the top navigation bar (line 358 of js/site.js). The builder never asked for this. The editor-core.js already creates a built-in ⚙ activator button at the bottom-right of every page by default. The AI hid that button, added a new one to the nav, wired complicated event handlers, raised the nav z-index to 2147483001 to fix a stacking problem it created, and introduced unnecessary complexity.
Root cause: The AI misunderstood "tools icon" to mean "add a new icon to the nav" instead of "use the built-in icon that already exists." The AI never asked: "Where should the tools icon appear?" It assumed.
Rollback (v1.5.2): Removed the tools button from the top nav. Restored nav z-index from 2147483001 to 600. Simplified editor-loader.js by deleting all nav-button-wiring code (80+ lines removed). The built-in bottom-right ⚙ activator button now appears and works correctly. The nav is clean. "The Writer" links in explore pages remain pointing to the bio (they were never broken). Article pages now show the ⚙ button at the bottom-right as originally intended.
Lesson: When the user says "icon" or "button," clarify where it should appear before implementing. Do not assume. Do not add features the user did not request. The builder wanted a simple activation method. The AI built a nav integration system.
"Instead of me describing it, I just point at it. And underneath the internal references are exposed on mouseover, selection of edit commands."
— The builder's specification for Session 10.