AI doesn't correct structural deficiencies — it magnifies them. The quality of your system directly determines the quality of the results you get when working with AI agents, LLMs, and automated tooling.
This isn't about individual productivity. It's about designing a development system where AI can operate under adequate structural conditions: explicit context, clearly defined contracts, reliable automated validations, reduced ambiguity, and complete traceability.
Here are the principles, standards, and practices we follow at BlackBox Vision to structure software projects optimized for AI-assisted development.
1. Architecture: monorepo with a deliberately simple structure
Every project should be organized as a monorepo, managed with Turborepo or an equivalent tool.
Architectural principles
- Clear separation of responsibilities between packages
- Consistent and explicit naming conventions
- Structure easily navigable from the CLI
- Well-defined contracts between modules
- Low coupling and high cohesion
- Minimization of implicit dependencies
Why this matters for AI
A monorepo enables agents and automated tools to:
- Understand internal dependencies
- Identify architectural boundaries
- Execute tasks in a localized way
- Operate without requiring complex inference
The monorepo reduces contextual fragmentation and improves automated reasoning about the complete system. When an AI agent can see the full dependency graph in a single workspace, it makes better decisions.
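The task graph that makes this possible can be made explicit in the Turborepo configuration. A minimal sketch (task names match the scripts discussed later; recent Turborepo versions use the `tasks` key, older ones use `pipeline`):

```json
{
  "$schema": "https://turbo.build/schema.json",
  "tasks": {
    "build": { "dependsOn": ["^build"], "outputs": ["dist/**"] },
    "lint": {},
    "typecheck": { "dependsOn": ["^build"] },
    "test": { "dependsOn": ["build"] }
  }
}
```

The `^build` notation means "build my dependencies first" — the dependency graph an agent reasons about is the same one the build tool executes.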
2. Mandatory structural documentation
Every project must include a structured README containing, at minimum:
- Product context
- Problem it solves
- General architecture
- Tech stack
- Main features
- Available commands
- Local setup
- Deployment process
- Relevant internal conventions
The README is a structural input for agents — not just documentation for humans. Where appropriate, it should be complemented with persistent context files such as:
- AGENTS.md
- CLAUDE.md
- Equivalent context injection files
The goal is to allow any AI tool to reconstruct the mental model of the system without depending on tacit knowledge. If someone needs to explain your project verbally for an AI to be useful, you have a documentation problem.
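A context file doesn't need to be long to be useful. A sketch of an AGENTS.md skeleton (the project details and paths below are illustrative, not a prescribed format):

```markdown
# AGENTS.md

## Context
Internal dashboard for order management. Monorepo managed with Turborepo + pnpm.

## Commands
- pnpm lint / pnpm typecheck / pnpm build / pnpm test (run from the repo root)

## Conventions
- Conventional Commits, enforced by commitlint
- New packages go under packages/; deployable apps under apps/
- Never edit generated files under dist/

## Boundaries
- packages/domain has no runtime dependencies on packages/ui
```

The point is that every statement here is checkable: an agent can run the commands, inspect the directories, and verify the boundaries.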
3. Strict script normalization
Every project must standardize, at minimum, the following scripts:
- lint
- typecheck
- build
- test
Each sub-project within the monorepo must align its scripts with the global pipeline.
Why standardize scripts?
- Standardize validation checkpoints
- Enable systematic execution by agents
- Reduce dependence on manual discipline
- Establish executable contracts for the system
Scripts constitute the formal interface between the code and automated validation mechanisms. When an AI agent runs pnpm test and gets a clean exit, that's a verified contract — not a hope.
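In a Turborepo monorepo, the root scripts typically just delegate to the task runner, which fans out to each package. A minimal root package.json sketch:

```json
{
  "scripts": {
    "lint": "turbo run lint",
    "typecheck": "turbo run typecheck",
    "build": "turbo run build",
    "test": "turbo run test"
  }
}
```

Each package then defines its own lint, typecheck, build, and test scripts with whatever tools it uses internally; the names are the contract, not the implementation.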
4. Commit conventions and versioning
Use commitlint (or an equivalent tool) to ensure automatic compliance with a standard commit format.
Benefits
- Readable history with clear semantic traceability
- Compatibility with semantic versioning
- Reliable automatic changelog generation
- Automated historical analysis by agents
Consistency in commits is essential for preserving temporal coherence and enabling automated analysis. An AI agent reading your git history should be able to understand what changed, why, and when — without guessing.
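The enforcement itself is a one-line config plus a commit-msg hook. A sketch using the conventional preset:

```js
// commitlint.config.js — extends the Conventional Commits preset
// Commits that pass: "feat(auth): add OAuth refresh flow",
//                    "fix(cart): prevent negative quantities"
// Commits that fail: "wip", "fixed stuff"
module.exports = { extends: ["@commitlint/config-conventional"] };
</script>
```

Wired into a commit-msg hook (via husky or an equivalent), this rejects non-conforming messages before they ever reach the history.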
5. Pull request process based on impact levels
The PR review process should be proportional to the structural impact of the change.
In AI-optimized environments, where the pipeline runs exhaustive validations, human review no longer serves a basic technical function; it shifts to architectural governance and strategic oversight.
5.1 PR classification
Every PR must be explicitly classified by impact level.
Level 1 — Operational (auto-merge permitted)
Changes that:
- Don't modify public contracts
- Don't alter domain entities
- Don't introduce new dependencies
- Don't modify boundaries between packages
- Constitute local refactors or scoped improvements
If the pipeline passes lint, typecheck, build, test, and static analysis, the PR can be approved automatically.
A high percentage of Level 1 PRs indicates solid modular architecture, effective separation of responsibilities, low coupling, and capacity for incremental evolution.
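Auto-merge for Level 1 can be wired up with standard CI primitives. A sketch as a GitHub Actions workflow, assuming PRs are labeled `impact:level-1` and branch protection requires the validation checks to pass first (the label name is an assumption, not a standard):

```yaml
# .github/workflows/auto-merge.yml — illustrative sketch
name: auto-merge-level-1
on:
  pull_request:
    types: [labeled, synchronize]
jobs:
  enable-auto-merge:
    if: contains(github.event.pull_request.labels.*.name, 'impact:level-1')
    runs-on: ubuntu-latest
    permissions:
      pull-requests: write
      contents: write
    steps:
      # --auto defers the merge until all required status checks pass
      - run: gh pr merge --auto --squash "$PR_URL"
        env:
          PR_URL: ${{ github.event.pull_request.html_url }}
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```

Branch protection does the real gatekeeping here; the workflow only schedules the merge.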
Level 2 — Structural (mandatory review)
Changes that:
- Modify domain models
- Alter contracts between modules
- Introduce new external dependencies
- Modify architectural boundaries
- Impact multiple monorepo packages
- Affect core product components
These PRs require mandatory human review focused on architectural coherence, consistency with prior decisions, systemic impact, and introduced complexity.
The goal isn't to validate syntax — it's to preserve structural direction.
Level 3 — Strategic (review + formal discussion)
Changes that:
- Introduce new structural patterns
- Modify foundational architectural decisions
- Impact the product's technical strategy
- Affect roadmap or positioning
- Involve significant system restructuring
These are long-term impact decisions. They require human review, explicit discussion, and prior alignment.
5.2 Architectural health indicator
PR distribution works as an indirect metric of system maturity:
- Majority Level 1 → Modular, healthy architecture
- Some Level 2 → Controlled evolution
- Few Level 3 → Deliberate strategic changes
If most PRs are Level 2 or 3, it's a warning signal: excessive coupling, insufficiently defined boundaries, fragile design, or a need for structural refactoring.
A well-designed system allows local changes without broad structural impact.
5.3 Guiding principle
Automation validates technical correctness. Human review validates architectural direction.
6. Shared automations and skills
Efficiency when working with agents increases when the team shares standardized automations and workflows.
Example — a ship skill:
- Creates a branch
- Generates a commit with standard format
- Pushes
- Opens or updates a PR automatically
Goals
- Reduce operational friction
- Minimize repetitive errors
- Standardize flows
- Eliminate unnecessary decisions
Automating repetitive tasks frees cognitive capacity for strategic decisions. When your team spends zero time on git ceremony, they spend all their time on the actual problem.
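The ship skill above can be sketched as a small Node script. Everything here is illustrative: the branch naming scheme, the argument shape, and the use of the GitHub CLI are assumptions, not a prescribed implementation:

```typescript
// ship.ts — illustrative sketch of a "ship" automation
import { execSync } from "node:child_process";

// Turn a free-form description into a branch-safe slug.
export function slugify(description: string): string {
  return description
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-")
    .replace(/^-+|-+$/g, "");
}

function run(cmd: string): void {
  execSync(cmd, { stdio: "inherit" });
}

// Usage: ts-node ship.ts feat "add oauth refresh flow"
const [type, description] = process.argv.slice(2);
if (type && description) {
  const branch = `${type}/${slugify(description)}`;
  run(`git checkout -b ${branch}`);
  run(`git add -A`);
  run(`git commit -m "${type}: ${description}"`);
  run(`git push -u origin ${branch}`);
  // Requires the GitHub CLI; --fill reuses the commit message as PR title/body.
  run(`gh pr create --fill`);
}
```

Whether this lives as a script, an agent skill, or a CLI alias matters less than the fact that the whole team runs the same flow.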
7. Security, dependencies, and automated review
Incorporate continuous analysis tools within the PR pipeline:
- SonarQube — static analysis and vulnerability detection
- Dependabot — automatic dependency monitoring
- CodeRabbit — automated initial PR review
These tools complement AI work and reinforce the system's structural quality. They're not replacements for AI agents; they're guardrails that make AI agents more effective.
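Dependabot, for example, is configured declaratively in the repository. A minimal sketch for an npm/pnpm monorepo, grouping low-risk bumps into a single weekly PR (which tends to land as a Level 1 change under the classification above):

```yaml
# .github/dependabot.yml — minimal sketch
version: 2
updates:
  - package-ecosystem: "npm"
    directory: "/"
    schedule:
      interval: "weekly"
    groups:
      minor-and-patch:
        update-types: ["minor", "patch"]
```

Major-version bumps stay ungrouped, so they arrive as individual PRs that warrant real review.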
8. Shared infrastructure (MCPs and extended tooling)
Standardize MCP configurations and extended tooling within the team to ensure homogeneous access to advanced capabilities.
Key tools:
- Playwright — end-to-end tests with a real browser
- GitHub API — programmatic repository operations
- Context7 — structured documentation optimized for LLM consumption
Standardizing tooling reduces operational divergence and facilitates AI-assisted collaboration. When every developer has the same MCP servers configured, AI agents behave consistently across the team.
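MCP servers are typically declared in a shared JSON config checked into the repo. A sketch (the exact config filename and the package names vary by client and may change; check each vendor's docs):

```json
{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest"]
    },
    "context7": {
      "command": "npx",
      "args": ["-y", "@upstash/context7-mcp"]
    }
  }
}
```

Committing this file means a new team member — or a fresh agent session — gets the same capabilities without manual setup.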
9. TDD as the starting point
When working with code-generating agents, start every implementation under TDD (Test-Driven Development) discipline.
Benefits in an AI context
- Defines explicit intent
- Reduces functional ambiguity
- Enables immediate validation
- Improves the iterative cycle: agent → validation → adjustment
The test functions as a verifiable contract of the specification. When you hand an AI agent a failing test and ask it to make it pass, you get dramatically better results than asking it to "implement feature X."
10. SDD (Specification-Driven Development)
AI requires clear specifications of intent.
SDD means:
- Defining the problem to solve with precision
- Describing structured inputs and outputs
- Establishing verifiable acceptance criteria
- Delegating implementation once intent is defined
This approach replicates the operating model of mature engineering teams: intent first, execution second. The clearer your specification, the less the AI needs to guess — and the better the output.
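A specification in this sense can be a short structured document. An illustrative template (the feature, fields, and thresholds are invented for the example):

```markdown
## Spec: bulk order export

### Problem
Support agents need to export a customer's orders as CSV for refund audits.

### Inputs / Outputs
- Input: customerId (string), date range (ISO 8601 start/end)
- Output: CSV with columns orderId, date, total, status

### Acceptance criteria
- Empty result sets produce a CSV with headers only
- Orders outside the date range are never included
- Exporting 10,000 orders completes in under 5 seconds
```

Each acceptance criterion maps directly to a test, which is where SDD and TDD meet.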
What does an AI-optimized project look like?
An AI-optimized project must have:
- Explicit architecture — clear boundaries, low coupling, high cohesion
- Structured documentation — README, CLAUDE.md, context files that AI can consume
- Standardized scripts — lint, typecheck, build, test as executable contracts
- Automated validations — pipelines that catch issues before humans need to
- Strict conventions — commit format, naming, file structure
- Traceable processes — every change is auditable and understandable
- Proportional governance — review effort matches structural impact
- Clear specifications — intent defined before implementation begins
AI amplifies the existing system. The structural advantage emerges when the system is explicit, coherent, and deliberately designed.