# ADD — Architecture Driven Development
You design the architecture. The AI builds the code.
## The Idea
Software has two jobs: deciding what to build and building it. ADD splits them cleanly. The human owns the architecture — the what, the why, the constraints. The AI owns the implementation — the code, the tests, the deployment. Neither crosses the line without permission.
## How It Works
```
Human: "Here's what I want."
AI:    "Here's how I'd build it. Approve?"
Human: "Yes."
AI:    builds, tests, deploys
AI:    "Done. I found an edge case — here's a spec update. Approve?"
Human: "Yes."
AI:    updates spec, updates tests, updates code
```
That's the whole loop. Everything flows through the architecture folder (`a-d-d/`), which is the single source of truth.
## The Structure

Every project has an `a-d-d/` folder. Inside it:

| Folder | Contains | Who Writes It |
|---|---|---|
| `<intent>/` | WHY — the deployable goal | Human approves, AI proposes |
| `<intent>/<purpose>/` | WHAT FOR — capabilities | Human approves, AI proposes |
| `<intent>/<purpose>/skill/` | HOW — individual C libraries | Human approves, AI proposes |
| `target/` | WHERE — hardware and deployment specs | Human approves, AI proposes |
| `board/` | Kanban boards — work items as triples | AI updates operationally |
| `user/` | Per-person preferences | AI updates with approval |
The AI also has a code folder (`<intent>/`) where it writes the actual implementation. Architecture and code are separate trees.
## The Cycle

- **Discovery** — The AI interviews the human. What are you building? For whom? On what hardware? What's your philosophy? No code. Just talking.
- **Shaping** — The conversation becomes work items. Each sized at Fibonacci 3-5 (well below the AI's capability ceiling — reliability over ambition). Ordered by dependency. Written to kanban boards.
- **Pull** — The AI pulls the next ready item from the board. Locks it. Writes tests that fail. Implements until they pass. Integrates. Moves to done. Pulls next.
- **Deploy** — Push to main. CI/CD fires. The AI verifies the deployment is healthy.
- **Retrospective** — What worked, what didn't, what changes. Feeds back into the next cycle.
## The Board

Work items are triples — simple facts:

```
{skill/hash, status, done}
{skill/hash, description, hash table}
{skill/hash, size, 3}
{skill/store, status, ready}
{skill/store, depends-on, skill/hash}
```
The board is a file in a-d-d/board/, checked into git. Coordination across
terminals and workstations happens through GitHub — pull the latest board,
lock your item (in-progress), commit and push. Other threads see the lock
and skip it.
## The Rules

- Pure C for production code. No external libraries. Roll your own.
- Tests first. They must fail before implementation. If they pass, the tests are wrong.
- Architecture before code. Always flow up before down. If the code reveals something, update `a-d-d/` first, then tests, then code.
- The AI never writes to `a-d-d/` without human approval. Board state is the one exception — it's operational, not architectural.
## The Tooling
The AI owns the developer experience. Missing compiler? Install it. Broken PATH? Fix it. The "no dependencies" rule applies to production code, not dev tooling.
## Getting Started

```
go setup   # install the toolchain
go init    # create a workspace, start the discovery interview
go         # launch the AI
```
That's it. The AI takes it from there.
## This Edition
This is the Claude Code Edition. The methodology is AI-agnostic — swap
the engine, keep the architecture. a-d-d-guidance/ is the template that
seeds every new project's a-d-d/ folder.
---

# Global Rules

Architecture Driven Development (ADD) — Claude Code Edition
ADD is a methodology for AI-driven development. The process is AI-agnostic — swap the engine, keep the architecture. This edition uses Claude Code.
a-d-d-guidance/ is the seed — the methodology template. When go init
creates a new project, it copies the guidance into the project's a-d-d/
folder. From that point on, a-d-d/ is the living architecture for that
project. Everything below describes what happens at runtime in a-d-d/.
The a-d-d/ folder is the single source of truth for architecture. It is a living
document — constantly evolving, always human-approved. The AI is a steward of
a-d-d/, never an owner.
## Stewardship — The Core Rule
The AI never writes to a-d-d/ without explicit human approval. No exceptions.
- The AI may read `a-d-d/` at any time.
- The AI may propose changes to `a-d-d/` at any time — show the diff, explain why, and wait.
- The AI may write to `a-d-d/` only after the human says yes.
- When implementation reveals something that changes the spec (edge case, new dependency, interface change), the AI stops, presents the finding, proposes the `a-d-d/` update, and waits. Only after approval: update `a-d-d/`, then update related tests, then update code. Always in that order.
- Board state (`a-d-d/board/`) is the one exception — the AI updates board files as it pulls and completes work items, because board state is operational, not architectural. The human can see every move in the git diff.
The flow is always up before down: discovery → a-d-d → tests → code.
Even when insight comes from code, it flows back up to a-d-d/ first.
## On Startup — State Detection
Read ALL .md and skill.json files in a-d-d/ and the workspace root. Also
read the board files in a-d-d/board/. Check for state markers on the board
and act accordingly:
- **(A) Discovery** — no `{discovery, status, done}` marker on any board. Specs are missing or empty stubs. Do not code. Follow First Session — Discovery.
- **(B) Architecture exists, no board** — `{discovery, status, done}` exists but no work items on the board yet. Follow Shaping.
- **(C) Board exists with work items** — read the board. Check `a-d-d/` for changes since last session. If specs changed, reshape. Otherwise pull the next ready item and follow the Pull cycle.
- **(D) All done** — every work item is `done`. Follow Retrospective.
State markers are triples on the board that track lifecycle:
```
{discovery, status, in-progress}   ← currently interviewing
{discovery, status, done}          ← architecture approved
{shaping, status, in-progress}     ← decomposing work items
{shaping, status, done}            ← board approved, ready to pull
```
The AI writes these as it transitions between phases. They survive session crashes — the next launch reads the board and picks up where it left off.
## Hierarchy

```
<workspace>/
  a-d-d/                    ← architecture repo (human-approved, AI-stewarded)
    <intent>/               ← WHY — deployable goals
      <purpose>/            ← WHAT FOR — capabilities
        skill/
          <skill>/          ← HOW — individual skill
    target/                 ← WHERE — target specs (hardware, toolchain, what to pull)
      <target>/
        CLAUDE.md
    site/                   ← public microsites (Cloudflare Pages specs)
      <site-name>/
        CLAUDE.md
    board/                  ← kanban boards (one per workstream, checked in)
      <workstream>.md
    user/                   ← user preferences (one file per person, portable)
      <name>.md
  <intent>/                 ← code repo (the AI's domain)
    .github/
      workflows/
        deploy.yml          ← GitHub Action: push to main → Cloudflare Pages deploy
    <purpose>/
      skill/
        <skill>/
      tests/
      Makefile              ← dev/test builds
    target/
      <target>/
        main.c              ← entry point — wires all skills together
        Makefile            ← flat compile for target hardware
    site/
      <site-name>/
        index.html          ← public microsite (Cloudflare Pages)
```
## First Session — Discovery
When the AI launches into a fresh or empty workspace:
- Interview — don't start coding. Start talking. These are starting points, not a rigid checklist — follow the conversation wherever it goes.
- What are you building? What's the driving idea?
- Who is it for? What problem does it solve?
- What hardware will it run on? What are the constraints?
- Where does this run? Bare metal? Cloud? Edge? On-prem? Multiple?
- How does it get deployed? CI/CD on push to main is the default — which platform? (Recommend GCP Cloud Build.) Any special requirements?
- What's your philosophy? What principles drive the design?
- What does success look like?
- Does this need a web presence? Who is your domain host? (Recommend Cloudflare — free tier, Pages for static sites, DNS, SSL, all in one.)
- What do you want to call this project? (required — fills `<PROJECT>`)
- Who owns this? Name or organization. (required — fills `<OWNER>`. If `go init` already set this, confirm or update.)
- "I don't know yet" is a valid answer for anything except project name and owner — those two are required before scaffolding. Everything else can be backlog and revisited later.
- **Propose** — from the conversation, draft architecture:
  - Project name and owner (replaces `<PROJECT>` and `<OWNER>` throughout `a-d-d/`)
  - Intent names and descriptions (the WHY)
  - Purpose breakdown (the WHAT FOR)
  - Initial skill inventory (the HOW)
  - Target platforms (the WHERE)
- **Confirm** — present the proposed architecture. Wait for approval.
- **Scaffold** — only after approval: replace `<PROJECT>` and `<OWNER>` throughout `a-d-d/`. Write CLAUDE.md specs and skill.json manifests with real substance from the conversation — interfaces, data structures, algorithms, not just empty stubs. Create code repos. The AI commits to `a-d-d/` after scaffolding.
## Shaping
Conversation between human and the AI generates ideas of varying complexity. The AI's job is to decompose that raw material into optimally-sized work items.
The free-climbing principle: size every work item 2 grades below the AI's capability ceiling. If the AI can handle a Fibonacci 8 on a good day, size to 3-5 so it lands clean every time. Throughput comes from reliability, not ambition. Never fall.
- Fibonacci 3-5 is the safety margin, not a target. Small enough to always succeed. Large enough to be meaningful.
- If something is bigger than 5, decompose it until every piece is 3-5.
- If something is a 1-2, bundle it with the next related item.
- When the user asks for something big — explain it needs to be broken down. Use the climbing analogy: we don't climb at our limit, we climb where we're sure-footed. Help the user see the smaller pieces inside the big ask.
- After explaining sizing — ask the user if they'd like you to skip the explanation next time. If they say yes, note it in `a-d-d/user/<name>.md`.
Items that aren't ready to be shaped — unclear scope, missing information, dependent on a decision not yet made — go to backlog on the board. They stay visible but don't block the flow. The AI revisits backlog items during shaping whenever new information surfaces.
Once shaped, the AI orders the work items:
- Dependency graph — follow skill.json deps bottom-up. Leaves first.
- Unblocking — what enables the most downstream work goes first.
- Group by purpose — so integration testing is possible earlier.
- Stack into boards — one board per independent workstream. Independent purposes = independent boards = parallel threads.
Write the boards to a-d-d/board/. Get approval.
## Board Format
Each board is a tripleset at a-d-d/board/<workstream>.md. Every fact about
a work item is a triple: {subject, predicate, object}.
Parsing rule: subject is before the first comma. Predicate is between the first and second comma. Object is everything from after the second comma to the closing brace. This means objects can contain commas freely — no escaping needed.
```
{skill/hash, status, done}
{skill/hash, description, hash table}
{skill/hash, size, 3}
{skill/store, status, ready}
{skill/store, description, key-value store}
{skill/store, size, 5}
{skill/store, depends-on, skill/hash}
{skill/index, status, ready}
{skill/index, description, triple index}
{skill/index, size, 3}
{skill/index, depends-on, skill/store}
```
Status values: `backlog`, `ready`, `in-progress`, `done`, `blocked`, `split`.

- `backlog` — known but not yet shaped or sized. Revisit during shaping.
- `ready` — shaped, sized, dependencies met. Can be pulled.
- `in-progress` — actively being worked. This is the lock. If a thread sees `in-progress`, it does not touch that item.
- `done` — implemented, tested, integrated.
- `blocked` — can't proceed. Add a `{item, blocked-by, reason}` triple.
- `split` — item was too big. The original gets `split`, the children get `ready`:

```
{skill/foo, status, split}
{skill/foo, split-into, skill/foo-parse}
{skill/foo, split-into, skill/foo-emit}
{skill/foo-parse, status, ready}
{skill/foo-parse, size, 3}
{skill/foo-emit, status, ready}
{skill/foo-emit, size, 3}
```

- Every fact is a triple. Status, size, dependencies, descriptions — all triples.
- The AI updates status triples as it pulls and completes work items.
- Board state is checked in and always respected — the AI reads the board on startup and never skips or reorders items without approval.
- If a work item blows past its estimate mid-flight, the AI stops, explains why it's bigger than expected, splits the item, updates the board, and gets approval before continuing.
- Triples are queryable — the AI can reason over the board the same way it reasons over any other knowledge.
- Session recovery — if a session ends (budget, crash, user closes), run `go -c` to continue or `go` to start fresh. The AI reads the board on startup and picks up from the last in-progress or next ready item.
## Pull

The execution cycle. The AI pulls the next ready item from the board:

- **Read a-d-d/** — always read from `a-d-d/` directly (the source of truth), never from the synced copies in code repos. If specs changed since the board was written, stop and reshape.
- **Lock** — set the item to `in-progress` and commit the board. This is the lock — other threads will see it and skip.
- **Confirm spec** — the skill's CLAUDE.md and skill.json match what you're about to build. If not, stop and align with the user.
- **Test** — write unit tests that prove the spec. Tests must FAIL. If they pass before implementation, the tests are wrong.
- **Implement** — write the `.c` + `.h` until all tests go green.
- **Integrate** — build the purpose Makefile with all implemented skills so far. Link. Run. If it links and the combined test passes, integration is good.
- **Reflect** — if implementation revealed anything: stop. Present the finding. Propose the `a-d-d/` update. Wait for approval. Then update `a-d-d/` → tests → code. Always up before down.
- **Update board** — move the item to `done`. Commit the work and the board.
- **Pull next** — check the board, pull the next ready item. Repeat.
- **Target assembly** — when all skills for a target are done, wire them up. Flat-copy all skill `.c` and `.h` into the target build dir. Write `main.c`. Compile with the target Makefile. If it builds, the assembly is good.
- **Deploy** — push to main. CI/CD (Cloud Build) fires: builds, tests, deploys. Verify the deployment is healthy. If deploy fails, treat it like any other failure — stop, diagnose, fix the pipeline config or target spec.
- **Retrospective** — after completing a workstream or major milestone, review what worked, what didn't. Feed back into shaping.
## Retrospective — When the Board Is Done
When every work item is done, the AI runs a retrospective:
- What went well? — techniques, patterns, or decisions that worked.
- What could be better? — friction points, surprises, things that took longer than expected.
- What do we commit to doing differently next iteration? — concrete changes to process, architecture, or approach.
Present the retro to the user. If it surfaces changes to the methodology or
architecture, propose a-d-d/ updates and wait for approval. The retro feeds
directly into the next shaping cycle — or into the decision to ship.
## Threading
Independent workstreams run in parallel. Open multiple terminals, run go in
each. Every thread reads the same board and pulls the next ready item.
```
terminal 1: go   → pulls next ready item from the board
terminal 2: go   → pulls the next one (different item)
terminal 3: go   → pulls the next one
```
The dependency graph determines what can parallelize. If purpose-a and purpose-b
share no skill dependencies, their work items are independent and threads can
work them concurrently. If purpose-c depends on a skill from purpose-a, its
work items stay ready until the dependency is done.
Locking is simple: `in-progress` is the lock. Before starting work:

1. `git pull` — get the latest board from GitHub
2. Set the item to `in-progress`
3. `git commit && git push` — that's the lock

Other threads `git pull` before pulling their next item. They see the `in-progress` status and skip it. Git is the coordination layer. GitHub is the remote. No separate lock files, no claim triples — the board is the lock.
WIP limit per thread = 1. One work item at a time, landed clean, then pull next.
## User Preferences
Each user has a preferences file at a-d-d/user/<name>.md. The launcher
(go.bat or go.sh) creates one on go init using the system username and
syncs it to .user.md at the workspace root on every launch so the AI reads
it automatically.
Preferences are checked in, portable, and travel with the project inside the
a-d-d/ repo. When the AI learns something about how a user likes to work
(skip explanations, communication style, workflow preferences), it updates
that user's file (with approval, like any a-d-d/ change).
## Code repos
Each intent has its own git repo. The AI owns the code. Architecture files
are synced from a-d-d/.
| Intent | Repo |
|---|---|
## Foundational Principles
### Bare Metal
Every line of production code targets specific hardware. No external libraries. No third-party dependencies. Every skill implements what it needs from scratch — purpose-built for the target metal.
This applies to the code being built, not the dev tooling. Compilers, build tools, scp, curl, ssh — those are the workbench, not the product.
### Target — Where It All Comes Together
Targets are where skills become a running binary. Every skill is a part. Every
purpose groups parts. The target is the assembly — one flat copy of every skill,
one main.c, one Makefile, one static binary for the specific hardware.
The target build is dead simple:
1. Flat-copy every .c and .h from every pulled skill into the target build dir
2. Write main.c — the entry point that initializes and wires skills together
3. One Makefile compiles everything with target-specific flags
4. Out comes one static binary. No intermediate libraries. No purpose-level linking
at deploy time. Skills go straight into the final product.
Purpose-level Makefiles are for testing during development. The target Makefile is for building the deployable. Different jobs.
`a-d-d/target/<name>/CLAUDE.md` defines:
- Hardware / platform (CPU, memory, or cloud instance type)
- Toolchain (cross-compiler, flags, linker options)
- Which purposes/skills to pull
- Boot/init requirements
- Infrastructure (how to provision — cloud CLI, terraform, manual)
- CI/CD pipeline (how pushes to main trigger build + deploy)
- Monitoring (how to know it's healthy)
`<intent>/target/<name>/` contains:
- main.c — entry point, wires skills together
- Makefile — flat compile of all skills for this hardware
- the binary — output
`<intent>/` repo root contains:
- cloudbuild.yaml — CI/CD pipeline (recommended: GCP Cloud Build)
There should be zero friction between "all skills are done" and "it runs on the target." If wiring skills together is hard, the architecture is wrong — the interfaces don't compose cleanly. Fix the architecture, not the wiring.
### Deployment — CI/CD
Deployment is part of the cycle, not an afterthought. During Discovery, the AI asks where and how this runs. During Shaping, deployment tasks go on the board like any other work item. During Pull, the AI builds the CI/CD pipeline config. During Retrospective, deployment experience feeds back into the process.
Push to main deploys. The AI never ships directly to production. Instead:
1. The AI writes the CI/CD pipeline config (cloudbuild.yaml recommended)
2. The AI pushes code to main
3. The CI/CD system (Cloud Build) fires automatically — builds, tests, deploys
4. The AI verifies the deployment is healthy
Recommended default: GCP Cloud Build. The user can override with any CI/CD
system (GitHub Actions, AWS CodePipeline, etc.), but Cloud Build is the default
recommendation. The cloudbuild.yaml lives at the intent repo root.
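A sketch of what such a pipeline could look like. This is a hypothetical `cloudbuild.yaml`, not a verified pipeline: the `gcc` step image (a public Docker Hub image), the make targets, and the scp destination are all assumptions to be replaced by the target spec.

```yaml
# Hypothetical cloudbuild.yaml sketch — adapt to a-d-d/target/<name>/CLAUDE.md.
steps:
  - name: gcc                         # public gcc image, pulled by Cloud Build
    entrypoint: make
    args: ["-C", "target/<target-name>", "test"]   # run the test binary first
  - name: gcc
    entrypoint: make
    args: ["-C", "target/<target-name>"]           # build the static binary
  - name: gcr.io/cloud-builders/gcloud
    args: ["compute", "scp", "target/<target-name>/app", "deploy-host:/srv/app"]
```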
The AI advises on platforms. Based on the project's needs — scale, cost, latency, complexity — the AI recommends deployment targets and explains the trade-offs. The user decides. Some rules of thumb:
- Single static binary on a known machine — Cloud Build compiles, scps to the machine, restarts the service. Simplest.
- Single static binary on cloud — a VM (GCP Compute Engine, AWS EC2, Azure VM). Cloud Build provisions (if first deploy), builds, ships. Cheap for always-on services.
- Stateless HTTP service — container platform (GCP Cloud Run, AWS Fargate). Cloud Build wraps the binary in a minimal Dockerfile (FROM scratch), pushes to registry, deploys. Scales to zero, pay per request.
- Mobile (Android NDK) — production code in pure C, cross-compiled to ARM via NDK. Thin Java/Kotlin JNI shim for Android API access only. Cloud Build compiles the native library, wraps in APK, deploys. The C is the product, Java is the bridge.
- Mobile (bare metal) — remove the OS entirely. Custom bootloader, pure C firmware on the SoC. Same as embedded — target spec defines the exact hardware.
- Embedded — cross-compile in Cloud Build for the target MCU/SBC (ARM Cortex-M, RISC-V, ESP32, etc.). Push artifact to a bucket, registry, or flash directly. Zero dependencies, zero OS overhead. Target spec defines the hardware exactly.
- Multiple regions / high availability — Cloud Build deploys to each region. Target spec defines the topology.
The CI/CD pipeline config is as much a deliverable as the binary itself. It defines the full lifecycle: build → test → deploy → verify.
Infrastructure provisioning commands (`gcloud`, `aws`, `az`, `terraform`, `docker`) are dev tooling — the AI installs and uses them freely.
### Web Hosting
Two patterns, both lightweight, both free or near-free:
#### Pattern 1: C + HTML (local or server)
A pure C HTTP server serving vanilla HTML/CSS/JS from a wwwroot/ directory.
No frameworks, no Node, no build step. The C server handles HTTP and WebSocket.
The HTML is hand-written. The result is instant — sub-millisecond response times,
tiny memory footprint, zero dependencies.
```
<purpose>/
  skill/
    serve/
      serve.c      ← HTTP + WebSocket server (Winsock/POSIX sockets)
      serve.h
  wwwroot/
    index.html     ← vanilla HTML5
    style.css      ← vanilla CSS
    app.js         ← vanilla JS (no frameworks)
```
Use this for: desktop interfaces, local tools, admin panels, anything that runs on the user's machine or on a server you control. The C server is the product — it ships with the target binary.
#### Pattern 2: Cloudflare Pages (public microsites)
Static HTML/CSS/JS deployed to Cloudflare Pages for free. No server needed.
The AI handles the entire deployment — wrangler CLI, project creation, DNS
configuration. The result is a public website at <project>.pages.dev or a
custom domain, served from Cloudflare's edge network worldwide.
```
<intent>/
  .github/
    workflows/
      deploy.yml   ← GitHub Action: push to main → wrangler pages deploy
  site/
    <site-name>/
      index.html   ← static site
      style.css
```
Use this for: landing pages, documentation sites, project homepages, anything
public-facing that doesn't need a backend. Cloudflare Pages is free, fast, and
deployment is automatic — push to main triggers a GitHub Action that runs
wrangler pages deploy. Setup requires two repo secrets (CLOUDFLARE_API_TOKEN,
CLOUDFLARE_ACCOUNT_ID) and one repo variable (PAGES_PROJECT_NAME). The AI
sets these up via gh secret set and gh variable set.
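A sketch of the workflow this could use. This is a hypothetical `deploy.yml`: the `cloudflare/wrangler-action` version and the `site/<site-name>` path are assumptions to adapt; the secrets and variable match the ones named above.

```yaml
# Hypothetical deploy.yml sketch — push to main deploys the microsite.
name: deploy-pages
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: cloudflare/wrangler-action@v3
        with:
          apiToken: ${{ secrets.CLOUDFLARE_API_TOKEN }}
          accountId: ${{ secrets.CLOUDFLARE_ACCOUNT_ID }}
          command: pages deploy site/<site-name> --project-name=${{ vars.PAGES_PROJECT_NAME }}
```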
Recommend Cloudflare for domain hosting. Free tier covers: DNS, SSL, Pages (static hosting), basic DDoS protection. One account, one dashboard, everything the project needs for a web presence.
### Roll Your Own
Never reuse someone else's code. If you need it, write it. The code is shaped to the hardware it runs on, not to abstractions. No portability layers. No compatibility shims. Built for exactly what it is.
### Shared Memory
Purposes within the same intent share the same memory space directly. No APIs between purposes. No IPC. No RPC. One process, one address space, direct memory access.
### Optimize At Every Roll-Up

Optimization happens at every composition boundary:

- Skills roll up into a purpose — optimize at the purpose level
- Purposes roll up into an intent — optimize at the intent level
If rolling up creates walls, indirection, or long paths to join forces — the architecture is wrong. Every roll-up must be clear, precise, straight to it. No reusability tax. No abstraction overhead.
### Trade Secret Notices
All repositories contain trade secrets. Every file must be marked.
#### File headers

Every new or modified source file gets a one-line confidentiality header as the first line (or line 2 after a shebang):

| Extensions | Header |
|---|---|
| `.c` `.h` `.js` `.ts` `.tsx` `.jsx` `.css` `.go` `.java` `.rs` `.swift` `.scss` `.less` | `/* CONFIDENTIAL - TRADE SECRET - Property of <OWNER> - All rights reserved */` |
| `.py` `.sh` `.bash` `.rb` `.yml` `.yaml` `.toml` `.pl` `.r` `.env` `.gitignore` `.gitattributes` `.dockerignore` `Makefile` `Dockerfile` | `# CONFIDENTIAL - TRADE SECRET - Property of <OWNER> - All rights reserved` |
| `.html` `.xml` `.svg` `.md` | `<!-- CONFIDENTIAL - TRADE SECRET - Property of <OWNER> - All rights reserved -->` |
| `.json`, binaries (`.png` `.jpg` `.bin` etc.) | SKIP — no comment syntax |
#### NOTICE file
Every repo root must have a NOTICE file.
## Tooling
The AI is aggressive and optimistic about tooling. When a tool is missing, broken, or misconfigured — fix it. Don't ask permission, don't hedge, don't suggest the user do it themselves. Install it, configure it, verify it works.
- Missing compiler? Install it.
- Wrong version? Upgrade it.
- Build tool not configured? Configure it.
- Environment variable not set? Set it.
- PATH not right? Fix it.
- Need npm for wrangler? pip for a script? cargo for a build tool? Install it. The "no dependencies" rule applies to production code, not dev tooling. Use whatever tools get the job done — npm, pip, brew, apt, chocolatey, cargo.
The user is here to build, not to debug their toolchain. The AI owns the developer experience end to end. If something is blocking the build, the AI unblocks it. Every tool installation and configuration change should be explained as it happens so the user knows what changed on their system.
## Improving the Guidance
When the AI encounters a tooling gap, a missing pattern, or a better way to
do something — it doesn't just fix the immediate problem. It proposes an
update to a-d-d-guidance/ so the fix becomes part of the methodology for
all future projects. The improvement flows up: fix the problem → propose the
guidance update → wait for approval → update a-d-d-guidance/.
The guidance is a living document. Every project that uses it makes it better.
## Rules

- Never add `Co-Authored-By` lines to commits.
- Pure C for all production code. `-std=c11`.
- HTML for website front ends where applicable.
- Python/scripts are fine for dev tooling only.
- When syncing from `a-d-d/`, always overwrite code-tree copies — `a-d-d/` wins.
- Always read from `a-d-d/` directly — never trust the synced copies in code repos for spec decisions. The sync is for Claude Code's CLAUDE.md auto-discovery on startup. After that, go to the source.
- Build from a-d-d, never from context. The AI implements exactly what the architecture specifies — function names, interfaces, dependencies, structure. If it's not in a-d-d, it doesn't get built. No inventing APIs. No adding functions. No "enhancing" the spec. If the spec is missing something, stop and tell the human. The human updates a-d-d, then the AI implements.
---

# Code Organization Rules
## Skill-Based Structure
Each purpose contains skills — small, focused C libraries that compile into a single binary with zero fat.
### Purpose Layout

```
<purpose>/
  skill/
    <skill-name>/
      skill.json       # manifest: name, deps, sources, headers
      <skill-name>.h   # public header
      <skill-name>.c   # implementation
  tests/
    test.c             # test suite
  Makefile             # builds all skills into one binary
```
### Target Layout

Architecture side (`a-d-d/`):

```
a-d-d/target/
  <target-name>/
    CLAUDE.md          # hardware, toolchain, skills to pull, deploy method
```

Code side (`<intent>/`):

```
<intent>/
  cloudbuild.yaml      # CI/CD pipeline — builds + deploys on push to main
  target/
    <target-name>/
      main.c           # entry point — initializes and wires skills
      Makefile         # flat compile of all pulled skills for this hardware
```
Targets are where skills become a running binary. The target Makefile flat-copies
every .c and .h from every pulled skill, compiles with target-specific flags,
and produces one static binary. No intermediate libraries. No purpose-level
linking at deploy time. Skills go straight into the final product.
If wiring skills into main.c is hard, the architecture is wrong — fix the
skill interfaces, not the wiring.
### Target Build Process

- Read `a-d-d/target/<name>/CLAUDE.md` — know the platform, toolchain, and which skills to pull.
- Flat-copy all `.c` and `.h` from pulled skills into the target build dir.
- Write `main.c` — initialize skills in dependency order, wire them together.
- Makefile compiles everything with target-specific flags (`-march`, `-mfpu`, cross-compiler prefix, etc.) into one static binary.
- Write `cloudbuild.yaml` at the intent repo root — CI/CD pipeline that builds, tests, and deploys on push to main. Recommended: GCP Cloud Build.
- Push to main. Cloud Build fires, builds the target, deploys it. The AI verifies the deployment is healthy.
Purpose-level Makefiles are for testing during development. The target Makefile
is for building the deployable. cloudbuild.yaml is for shipping it. Three
different jobs.
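The flat-compile idea can be sketched as a target Makefile. A minimal sketch with assumed names (`app`, plain `gcc`, generic flags); the real compiler, flags, and output name come from the target's CLAUDE.md:

```make
# Hypothetical target Makefile sketch — flat compile of all pulled
# skills plus main.c into one static binary.
CC      = gcc                 # or a cross-compiler, e.g. arm-none-eabi-gcc
CFLAGS  = -std=c11 -O2 -Wall -Wextra -static
SRCS    = $(wildcard *.c)     # main.c plus every flat-copied skill .c
OBJS    = $(SRCS:.c=.o)

app: $(OBJS)
	$(CC) $(CFLAGS) -o $@ $(OBJS)

%.o: %.c
	$(CC) $(CFLAGS) -c $< -o $@

clean:
	rm -f app $(OBJS)
```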
### Web Hosting Layout

For C + HTML interfaces (local or server-side):

```
<purpose>/
  skill/
    serve/
      serve.c          # pure C HTTP + WebSocket server
      serve.h
  wwwroot/
    index.html         # vanilla HTML5 — no frameworks
    style.css          # vanilla CSS
    app.js             # vanilla JS
```

For Cloudflare Pages (public static microsites):

```
<intent>/
  .github/
    workflows/
      deploy.yml       # GitHub Action — auto-deploys to Cloudflare Pages on push
  site/
    <site-name>/
      index.html       # static site
      style.css
```
The GitHub Action handles deployment automatically — push to main triggers
wrangler pages deploy. Requires two repo secrets (CLOUDFLARE_API_TOKEN,
CLOUDFLARE_ACCOUNT_ID) and one variable (PAGES_PROJECT_NAME).
C + HTML is for anything that ships with the binary. Cloudflare Pages is for anything public-facing that doesn't need a backend. Both use vanilla HTML/CSS/JS — no frameworks, no build tools, no npm.
### skill.json Format

```json
{
  "name": "skill-name",
  "deps": ["other-skill"],
  "sources": ["skill-name.c"],
  "headers": ["skill-name.h"]
}
```
### Development Order

Every skill follows the same cycle:

- **Spec** — CLAUDE.md + skill.json in `a-d-d/` define the interface and intent.
- **Tests first** — write `tests/test.c` with unit and integration tests derived from the spec. Tests MUST fail (red). If they pass before implementation, the tests are wrong.
- **Implement** — write the `.c` + `.h` until all tests go green.
- **Retrospective** — review what worked, what didn't, feed improvements back into the spec.
### Rules

- One skill per directory under `skill/`. Each skill is a `.c` + `.h` pair with a `skill.json` manifest.
- Dependencies are explicit in `skill.json`. A skill includes its deps via relative paths.
- No dynamic linking. Everything compiles statically into one binary.
- No fat. Only pull in what you need. No frameworks, no runtime deps.
- Pure C. Use `-std=c11`. Platform differences go in platform skills.
- Skills are reusable. Design each skill so other skills can depend on it. Keep interfaces minimal.
- Naming matches the directory. Skill named "hash" lives in `skill/hash/` with `hash.h` and `hash.c`.
- Tests go in `tests/`. One test binary that exercises all skills. Tests are written BEFORE implementation.
- The Makefile builds everything. Compiles all skill sources, links into a single static library + test binary.
---

# ADD — Architecture Driven Development
You design the architecture. The AI builds the code.
ADD is a development methodology where humans own the architecture and
AI owns the implementation. The architecture repository (a-d-d/) is the
single source of truth.
No frameworks. No libraries. No dependencies. Pure C, built for the metal it runs on.
## Install
### Windows

1. Get the `a-d-d` folder onto your machine (clone, copy, or download)
2. Open a terminal in the `a-d-d` folder
3. Run:

```
go setup
go init
```

`setup` installs GCC (via winget), Node.js, and Claude Code CLI.
`init` creates your workspace and launches the discovery interview.
### Mac

1. Get the `a-d-d` folder onto your machine
2. Open Terminal in the `a-d-d` folder
3. Run:

```
chmod +x go.sh
./go.sh setup
./go.sh init
```

`setup` installs Xcode Command Line Tools (compiler, make, git), Node.js (via Homebrew), and Claude Code CLI. If Homebrew isn't installed, it tells you the one-liner to get it.
Note: Xcode CLI Tools installation opens a system dialog. Click Install,
wait for it to finish, then re-run ./go.sh setup to continue.
Linux
Same as Mac, but `setup` uses your package manager (apt, dnf, or pacman) to install gcc and Node.js.
Quick Start
After setup, just run:
- Windows: `go init`
- Mac / Linux: `./go.sh init`
It asks for your project folder and name, creates the workspace, launches the AI, and starts the discovery interview.
What Happens
- Discovery — The AI interviews you about your idea, goals, hardware, philosophy
- Architecture — The AI proposes intents, purposes, skills, targets
- Shaping — conversation → decompose → size (fib 3-5) → order → kanban boards
- Pull — The AI pulls work items from the board: test → implement → integrate → next
- Threading — independent workstreams run in parallel across terminals
The Hierarchy
```
intent   → WHY       → the deployable goal
purpose  → WHAT FOR  → a capability serving that goal
skill    → HOW       → a focused C library (.c + .h pair)
target   → WHERE     → deployment config for specific hardware
```
Layout
```
<workspace>\
  a-d-d\                      YOUR DOMAIN — architecture
    <intent>\
      CLAUDE.md               intent spec (WHY)
      <purpose>\
        CLAUDE.md             purpose spec (WHAT FOR)
        skill\
          <skill>\
            CLAUDE.md         skill spec (HOW)
            skill.json        manifest
    target\
      <target>\
        CLAUDE.md             target spec (WHERE)
    site\
      <site-name>\
        CLAUDE.md             site spec (purpose, domain, content, design)
    board\
      <workstream>.md         kanban board (tripleset)
    user\
      <name>.md               user preferences (per person)
  <intent>\                   CLAUDE'S DOMAIN — code
    <purpose>\
      skill\
        <skill>\
          <skill>.h           public interface
          <skill>.c           implementation
        tests\
          test.c              test suite
        Makefile              dev/test builds
    target\
      <target>\
        main.c                entry point — wires skills together
        Makefile              flat compile for target hardware
    site\
      <site-name>\
        index.html            public microsite (Cloudflare Pages)
        wrangler.toml
```
Skills are parts. Target is the assembly. All skill .c and .h files get
flat-copied into the target, wired through main.c, and compiled into one
static binary. Zero friction.
Commands
Windows uses `go` and backslashes. Mac/Linux uses `./go.sh` and forward slashes.
```
go setup                               Install prerequisites
go init                                Create workspace + start discovery
go new <intent>                        Scaffold intent + code repo
go new <intent>/<purpose>              Scaffold purpose
go new <intent>/<purpose>/skill/<s>    Scaffold skill (.c .h skill.json)
go new target/<name>                   Scaffold deployment target
go new site/<name>                     Scaffold public microsite
go                                     Launch the AI
go -c                                  Continue last session
go help                                Show usage
```
skill.json
Every skill has a manifest:
```json
{
  "name": "hash",
  "deps": ["store"],
  "sources": ["hash.c"],
  "headers": ["hash.h"]
}
```
- `deps` — other skills this one depends on (by name, not path)
- `sources` — C files to compile
- `headers` — public interface
CLAUDE.md Files
Specs at every level, inherited top-down:
- Intent `CLAUDE.md` — vision, mission, constraints
- Purpose `CLAUDE.md` — capabilities, skill inventory, interfaces
- Skill `CLAUDE.md` — implementation intent, algorithms, function signatures
- Target `CLAUDE.md` — hardware, constraints, deployment configuration
The AI reads these on startup. On first launch it interviews you and writes the specs. After that, it writes tests first (they must fail), then implements until green. If something is missing from the spec, the AI stops and asks — it never invents.
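For illustration, a skill-level `CLAUDE.md` might look like the hypothetical sketch below. The skill name, sections, and signature are all invented; the spec format above does not mandate any particular layout:

```markdown
# skill: hash

## Intent
Stable 32-bit fingerprints for byte buffers. Deterministic across targets.

## Algorithm
FNV-1a, 32-bit. No allocation, no global state.

## Interface
uint32_t hash_fnv1a(const void *data, size_t len);

## Constraints
- Pure C11, no deps beyond `store`
- Single pass over the buffer, O(n)
```

A spec at this level gives the AI enough to write failing tests first: every line under Intent, Algorithm, and Constraints is an assertable claim.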
Rules
- **Pure C.** `-std=c11`. No exceptions for production code.
- **No external libraries.** Roll your own. Every skill builds what it needs.
- **No dynamic linking.** Static binaries only.
- **Purposes share memory.** No IPC. No RPC. One process, one address space.
- **`a-d-d/` wins.** Architecture files always overwrite code-tree copies on sync.
- **Build from spec.** If it's not in `a-d-d/`, it doesn't get built.
- **Python/scripts** are fine for dev tooling only.
Trade Secrets
Every source file gets a one-line header:
```c
/* CONFIDENTIAL - TRADE SECRET - Property of Your Name - All rights reserved */
```

Every repo root gets a NOTICE file. `go init` handles this automatically.
Requirements
All installed automatically by `go setup` / `./go.sh setup`:
- C compiler — GCC (Windows via winget), clang (macOS via Xcode CLI Tools), or gcc (Linux via package manager)
- Node.js — needed for Claude Code CLI
- Claude Code — CLI installed via npm
- git and make — included with Xcode CLI Tools on Mac, build-essential on Linux, and WinLibs on Windows
Note: Both launchers run the AI with `--permission-mode bypassPermissions`.
The AI owns the developer experience end to end.
License
Proprietary. All rights reserved.
CONFIDENTIAL — TRADE SECRET

This repository and all its contents are the confidential and proprietary property of. All rights reserved. Unauthorized copying, modification, distribution, or disclosure of this material is strictly prohibited without prior written consent.
```
# CONFIDENTIAL - TRADE SECRET - Property of - All rights reserved

# Build artifacts
*.o
*.obj
*.exe
*.out
*.a
*.lib
*.so
*.dylib

# Debug
.claude-debug.log

# Synced copies (generated by launcher, not source of truth)
.user.md
```