# Makechain

> A realtime decentralized protocol for ordering and storing git-like messages

## Architecture

Makechain uses a layered architecture with single-chain Simplex BFT consensus, parallel per-project execution, and a separate data availability layer.

### System overview

Makechain architecture: clients connect via gRPC to the validator node containing API, mempool, Simplex BFT consensus, execution engine, state store, and DA layer

```
┌─────────────────────────┐
│        Clients          │  grpc-web / gRPC
│  (Browser, CLI, SDK)    │
└───────────┬─────────────┘
            │
┌───────────▼──────────────────────────────────────┐
│                  Validator Node                  │
│                                                  │
│  ┌──────────┐  ┌──────────────┐  ┌────────────┐  │
│  │ gRPC API │→ │   Mempool    │→ │ Execution  │  │
│  │ (tonic)  │  │              │  │  Engine    │  │
│  └──────────┘  └──────┬───────┘  └─────┬──────┘  │
│                       │                │         │
│               ┌───────▼────────────────▼───────┐ │
│               │  Simplex BFT (single chain)    │ │
│               │ ~200ms blocks, ~300ms finality │ │
│               └───────────────┬────────────────┘ │
│                               │                  │
│               ┌───────────────▼────────────────┐ │
│               │  State Engine (MemoryStore)    │ │
│               │  Prefix-namespaced key-value   │ │
│               └────────────────────────────────┘ │
└──────────────────────────────────────────────────┘
```

### Layers

#### Message Layer

Every message is a self-authenticating envelope containing a BLAKE3 hash, Ed25519 signature, and the signer's public key. Messages are structurally validated before entering the mempool.

#### Consensus Layer

A single Simplex BFT consensus chain orders all messages. The leader proposes blocks by draining the mempool, and the execution engine processes them in two phases.

Two-phase execution: serial account pre-pass, then parallel project execution via overlay stores, merging diffs into a BLAKE3 state root

The two phases are:

1. **Account pre-pass** — KEY\_ADD, KEY\_REMOVE, OWNERSHIP\_TRANSFER, ACCOUNT\_DATA, VERIFICATION\_ADD/REMOVE, PROJECT\_CREATE, PROJECT\_REMOVE, and FORK are applied serially (they modify shared account state like project count)
2. **Parallel project execution** — Remaining project-scoped messages are grouped by `project_id` and executed in parallel via rayon, each with its own copy-on-write overlay store

This single-chain model with parallel execution achieves high throughput without the complexity of cross-shard coordination.

#### State Layer

State is stored in a prefix-namespaced key-value store with lexicographic ordering for range scans:

| Prefix | Namespace                        |
| ------ | -------------------------------- |
| `0x01` | Project state                    |
| `0x02` | Project metadata                 |
| `0x03` | Refs                             |
| `0x04` | Commits                          |
| `0x05` | Collaborators                    |
| `0x06` | Account state                    |
| `0x07` | Account metadata                 |
| `0x08` | Key entries                      |
| `0x09` | Verifications                    |
| `0x0A` | Project name index               |
| `0x0B` | Key reverse index (pubkey → mid) |

Currently backed by an in-memory BTreeMap (`MemoryStore`). The `StateStore` trait is designed for future migration to QMDB (Queryable Merkle Database) for persistence and merkle proofs.

#### Data Availability Layer

The consensus layer stores only message metadata (\~100-500 bytes). File content (blobs, trees) lives in a separate DA layer, referenced by `da_reference` in commit bundles.
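The prefix-namespaced layout maps naturally onto an ordered map. Below is a minimal sketch of prefix keys and range scans over a `BTreeMap`; the constants and helpers are illustrative, not the actual `MemoryStore` API:

```rust
use std::collections::BTreeMap;
use std::ops::Bound;

// Hypothetical prefix constants mirroring the namespace table.
const PREFIX_PROJECT_STATE: u8 = 0x01;
const PREFIX_REFS: u8 = 0x03;

/// Build a namespaced key: one prefix byte followed by the raw key bytes.
fn key(prefix: u8, raw: &[u8]) -> Vec<u8> {
    let mut k = Vec::with_capacity(1 + raw.len());
    k.push(prefix);
    k.extend_from_slice(raw);
    k
}

/// Range-scan every entry under a prefix, relying on lexicographic order.
/// Note: this sketch assumes prefix < 0xFF; a real store must special-case
/// the 0xFF byte boundary (cf. the cursor-pagination fix in the changelog).
fn scan_prefix(
    store: &BTreeMap<Vec<u8>, Vec<u8>>,
    prefix: u8,
) -> impl Iterator<Item = (&Vec<u8>, &Vec<u8>)> {
    store.range((Bound::Included(vec![prefix]), Bound::Excluded(vec![prefix + 1])))
}

fn main() {
    let mut store: BTreeMap<Vec<u8>, Vec<u8>> = BTreeMap::new();
    store.insert(key(PREFIX_PROJECT_STATE, b"proj-a"), b"state-a".to_vec());
    store.insert(key(PREFIX_PROJECT_STATE, b"proj-b"), b"state-b".to_vec());
    store.insert(key(PREFIX_REFS, b"proj-a/refs/heads/main"), b"commit-hash".to_vec());

    // Only the two 0x01-prefixed entries come back, in key order.
    assert_eq!(scan_prefix(&store, PREFIX_PROJECT_STATE).count(), 2);
}
```

Because keys sort lexicographically, every entry in a namespace is contiguous, which is what makes per-prefix iteration and cursor-based pagination cheap.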
### Commonware Primitives

Makechain builds on the [Commonware Library](https://commonware.xyz):

| Primitive                 | Usage                                    |
| ------------------------- | ---------------------------------------- |
| `commonware-consensus`    | Simplex BFT consensus engine             |
| `commonware-p2p`          | Authenticated peer connections           |
| `commonware-parallel`     | Execution strategies (Sequential, Rayon) |
| `commonware-runtime`      | Async task execution (tokio backend)     |
| `commonware-cryptography` | Ed25519 signing, BLAKE3 digests          |
| `commonware-codec`        | Binary serialization                     |

### gRPC API

The node exposes a gRPC service on port 50051 (configurable) with:

* **grpc-web support** — browser clients via HTTP/1.1 (tonic-web)
* **CORS** — configured for cross-origin grpc-web requests
* **Server reflection** — runtime service discovery (grpc reflection v1)
* **Message streaming** — `SubscribeMessages` with type and project\_id filters

## Building with AI

Makechain documentation is built with AI-first principles, providing multiple ways for LLMs and AI assistants to access protocol documentation.

### llms.txt

The docs automatically generate [`llms.txt`](https://llmstxt.org/) files for LLM consumption:

* [`/llms.txt`](https://makechain.pages.dev/llms.txt) — a concise index of all pages with titles and descriptions
* [`/llms-full.txt`](https://makechain.pages.dev/llms-full.txt) — complete documentation content in a single file

These files are generated at build time and served at the root of the site. Use `llms-full.txt` to give an AI assistant full context on the Makechain protocol in one shot.

### Ask AI

Every documentation page includes an "Ask in ChatGPT" button that opens the current page context in ChatGPT with a Makechain-aware prompt. The dropdown also lets you copy the raw page content for pasting into any AI assistant.

Use ⌘ + I (macOS) or Ctrl + I (Windows/Linux) to quickly access the AI menu.
#### Using with Claude

Copy the page content (or the full `llms-full.txt` URL) and paste it into [Claude](https://claude.ai):

```
Read the Makechain protocol documentation at https://makechain.pages.dev/llms-full.txt
and answer my questions about it.
```

#### Using with ChatGPT

Click the "Ask in ChatGPT" button on any page, or open ChatGPT and provide the docs URL:

```
Research this page: https://makechain.pages.dev/protocol/overview
and help me understand the message semantics.
```

### Markdown access

Any documentation page can be accessed as raw Markdown by appending `.md` to the URL:

```
https://makechain.pages.dev/protocol/messages.md
```

This provides better token efficiency and easier parsing for LLMs compared to HTML.

### Claude Code

Add the Makechain docs as context for [Claude Code](https://claude.ai/claude-code) sessions:

```bash
# Add llms-full.txt as project context
claude context add https://makechain.pages.dev/llms-full.txt
```

Or reference the protocol specification directly:

```bash
# The full specification lives in the repo
cat protocol/SPECIFICATION.md
```

### Example prompts

Here are effective prompts for working with Makechain using AI assistants:

#### Protocol understanding

```
What are the differences between 1P and 2P message semantics in Makechain?
How does the compare-and-swap mechanism work for REF_UPDATE?
```

#### Building on Makechain

```
Show me how to construct and sign a PROJECT_CREATE message using Ed25519.
How do I verify an Ethereum address claim signature?
```

#### Architecture questions

```
How does the two-phase execution model work?
Why are some messages processed serially in the account pre-pass?
```

#### Debugging

```
I'm getting a StorageLimitExceeded error when creating a project.
What are the storage limits and how do I check my capacity?
```

### Protocol specification

The complete protocol specification is available in the repository at [`protocol/SPECIFICATION.md`](https://github.com/officialunofficial/makechain/blob/main/protocol/SPECIFICATION.md). This is the canonical reference — the documentation site is derived from it.

For AI assistants working with the codebase, the [`CLAUDE.md`](https://github.com/officialunofficial/makechain/blob/main/CLAUDE.md) file provides build commands, architecture overview, module descriptions, and conventions.

## Changelog

Notable changes to the Makechain protocol, node, and documentation. Organized by month and grouped by category.

### February 2026

#### Features

* **Idle consensus throttling** — drop the oneshot sender in `propose()` when the mempool is empty, using Simplex BFT's nullification mechanism to throttle idle rounds from \~100+/sec to \~5/sec
* **QMDB runtime storage directory** — configure commonware runtime's `storage_directory` for QMDB partition placement, replacing manual directory management
* **Cold-start retry** — gateway retries on empty gRPC response bodies during container cold start, ensuring reliable responses even when the node is booting
* **Subscriber robustness** — signal lagged subscribers and enforce per-connection subscription limits to prevent resource exhaustion
* **Per-page markdown generation** — enable AI agents to fetch individual documentation pages as markdown via `.md` URLs
* **Ask AI dropdown** — custom AI integration menu in docs with ChatGPT, Claude, copy-as-markdown, and llms.txt links
* **Interactive demos** — six interactive demo pages (register account, create project, push commits, verify identity, fork project, manage access) with a reusable Demo component system
* **Design system** — 55 shape SVGs, brand guidelines, color system, typography scale, component library, and writing guide
* **Health endpoints** — `/healthz` and `/readyz` HTTP endpoints on the metrics server for load balancer integration
* **Prometheus
gossip metrics** — P2P monitoring with broadcast/receive counters by outcome
* **Consensus event metrics** — track proposal, verification, and commit events
* **Validator key file** — `--validator-key-file` flag for production key loading (alternative to `--seed`)
* **Structured startup logging** — startup completion log with timing breakdown
* **AddVerification CLI** — `add-verification` command for linking external addresses
* **ListKeys RPC** — paginated key listing by account
* **Multi-validator flags** — `--bootstrapper` and multi-participant CLI flags for the node binary

#### Fixes

* **Empty block disk exhaustion** — stop infinite empty block production that caused disk exhaustion from snapshots; re-enable container snapshots with the default interval
* **Snapshot restore logging** — log the state root hash when restoring from snapshots instead of discarding it; remove redundant `entries().count()` call
* **QMDB empty diff skip** — skip QMDB persistence for blocks with no state changes as defense-in-depth
* **NOT\_FOUND for missing resources** — return proper gRPC `NOT_FOUND` status for missing accounts and commits instead of empty responses
* **Cursor pagination** — fix 0xFF byte boundary bug in cursor-based pagination
* **Search pagination** — fix `search_projects` pagination and use saturating arithmetic for stats counters
* **Verification count gate** — enforce verification limit before processing claim
* **Empty block liveness** — ensure empty blocks still advance the chain
* **Lazy account init** — initialize account state on first key registration instead of eagerly
* **Nonce overflow** — use saturating addition for ref nonces to prevent overflow
* **Commit count inflation** — fix double-counting in commit statistics
* **Name collision on restore** — check name uniqueness when restoring a removed project
* **Ref type immutability** — prevent changing a ref's type (branch vs tag) after creation
* **Owner-as-collaborator guard** — reject
`COLLABORATOR_ADD`/`REMOVE` targeting the project owner
* **Reverse index corruption** — fix key reverse index cleanup on `KEY_REMOVE`
* **Chain stats accuracy** — correct `state_entries` undercount and `total_accounts` source
* **Missing metrics tracking** — add `track_request` calls to 17 gRPC endpoints
* **Gossip replay protection** — reject already-committed messages in the gossip receiver
* **State root snapshot** — use actual state root in shutdown snapshot instead of zero hash
* **Network flag validation** — fail fast on invalid `--network` flag instead of silent devnet default
* **Block hash verification** — verify block hash integrity before storing in `commit_block`
* **DA reference logging** — log warning on malformed DA reference decode instead of silent skip
* **Solana verification safety** — replace `unwrap()` with error handling in `verify_sol_claim`

#### Refactoring

* **Hex encoding optimization** — use pre-allocated buffer instead of per-byte `format!()` calls
* **`as_str()` methods** — add `as_str()` to `MessageType` and other enums to eliminate `format!("{:?}")` allocations
* **Reporter double-lock fix** — return block response from `commit_block` to avoid double-locking in reporter
* **`lock_state()` helper** — extract shared state lock helper in gRPC service for consistent error handling
* **Shared hex module** — consolidate duplicate hex encoding functions into **src/hex.rs**
* **Block build simplification** — simplify `build_block` panic path in `commit_block`
* **Debug derives** — add `Debug` derives to public consensus and API structs
* **Mempool optimization** — eliminate message cloning in mempool drain and harden decode paths
* **Reverse pubkey index** — O(1) account-by-key lookups via `0x0B` prefix index

#### Documentation

* **RPC reference** — comprehensive reference for all 32 gRPC methods with request/response schemas
* **Protocol docs** — scope requirements, storage limits, execution phase corrections
* **Design system pages** —
brand, colors, typography, components, writing guide, shapes
* **Key schema docs** — updated with new prefixes (`0x0A`, `0x0B`) and current test counts

## Contributing

Development setup, test workflow, and contribution guidelines.

### Prerequisites

| Tool   | Version       | Purpose                              |
| ------ | ------------- | ------------------------------------ |
| Rust   | Nightly 1.93+ | Commonware requires nightly features |
| protoc | 3.x+          | `tonic-build` codegen                |
| Bun    | 1.x+          | Docs site (Vocs)                     |

Install Rust nightly:

```bash
rustup install nightly
rustup default nightly
```

Install protoc:

```bash
# macOS
brew install protobuf

# Ubuntu / Debian
sudo apt install protobuf-compiler
```

***

### Repository structure

| Path                      | Description                                                             |
| ------------------------- | ----------------------------------------------------------------------- |
| **src/**                  | Rust library crate — state, consensus, API, validation, message modules |
| **src/bin/node.rs**       | Full validator node binary                                              |
| **src/bin/cli.rs**        | CLI client for interacting with a node                                  |
| **proto/makechain.proto** | Protobuf service and message definitions                                |
| **tests/**                | Integration and unit test suites                                        |
| **docs/**                 | Vocs documentation site                                                 |
| **protocol/**             | Protocol specification (SPECIFICATION.md)                               |

***

### Build and test

#### Build

```bash
cargo build            # Build library + node binary (runs tonic codegen via build.rs)
cargo run --bin node   # Start node: gRPC :50051, p2p :50052, Simplex consensus
cargo run --bin cli    # CLI client
```

#### Run tests

```bash
cargo test             # Run all 520 tests
cargo test <name>      # Run a single test by name
```

#### Test distribution

| File                           | Tests | Coverage                                                                                |
| ------------------------------ | ----- | --------------------------------------------------------------------------------------- |
| **tests/state\_test.rs**       | 107   | State transitions, authorization, 2P semantics, CAS, LWW, archive, fork, storage limits |
| **tests/integration\_test.rs** | 100   | End-to-end gRPC submit, mempool, propose, commit, query, streaming, multi-account       |
| **tests/validation\_test.rs**  | 76    | Structural validation for every message type                                            |
| **tests/api\_test.rs**         | 53    | API query layer: get/list for all resources, pagination, verifications                  |
| **tests/message\_test.rs**     | 10    | Signing and verification round-trips                                                    |
| Unit tests (inline)            | 174   | State, consensus, and API module internals                                              |

***

### Conventions

#### TDD workflow

Write tests first, then implement.

#### Error assertions

Use `matches!()` for asserting error variants in tests:

```rust
let result = apply_message(&mut store, &msg);
assert!(matches!(result, Err(StateError::ProjectNotFound(_))));
```

#### Proto changes

When adding new message types or RPCs:

1. Modify **proto/makechain.proto**
2. Run `cargo build` — Rust types appear automatically via `tonic-build`
3. Add structural validation in **src/validation.rs**
4. Add state handlers in **src/state/handlers/**
5. Add API query functions in **src/api/query.rs** if needed
6. Write tests covering the new type

#### Code style

* Rust edition 2024, nightly toolchain
* All hash fields are 32 bytes, all signatures are 64 bytes
* Proto enum variants use `SCREAMING_SNAKE_CASE` with a type prefix
* State keys use prefix-byte namespacing (see **src/state/keys.rs**)

***

### Pull requests

1. Create a feature branch from `main`
2. Write tests for your changes
3. Run `cargo test` and ensure all 520 tests pass
4. Run `cargo build` to verify compilation
5. Open a pull request with a clear description of the change

***

### Docs contribution

The documentation site uses [Vocs](https://vocs.dev) with Bun:

```bash
cd docs
bun install     # Install dependencies (first time)
bun run dev     # Dev server at http://localhost:5173
bun run build   # Build static site to docs/dist/
```

Pages are MDX files in **docs/pages/**. Follow the [writing guide](/design/writing-guide) for conventions. Deployed to Cloudflare Pages via **docs/wrangler.toml**.
## FAQ

Common questions about the protocol, identity, consensus, storage, and development.

### Protocol

#### How is Makechain different from Git?

Git is local. Makechain orders and stores git-like operations (project creation, commits, ref updates, access control) as signed messages on a BFT consensus chain. Every operation has cryptographic attribution and global ordering.

#### What are 1P and 2P semantics?

* **1P (one-phase)** — unilateral state changes with no paired undo. Includes `FORK`, `PROJECT_METADATA`, `ACCOUNT_DATA`, `COMMIT_BUNDLE`, and `PROJECT_ARCHIVE`.
* **2P (two-phase)** — add/remove pairs on a set. Remove wins on tie. Used for projects, refs, collaborators, keys, and verifications.

See the [state model](/protocol/state-model) for details.

#### How does CAS work for refs?

Ref updates include the expected current hash (`old_hash`) and a monotonic nonce. If the ref has moved, the update is rejected with `RefCasMismatch`.

#### What happens when a message fails execution?

Stages 1–5 of the [submit pipeline](/protocol/submit-pipeline) reject synchronously on submit. Stage 6 (block execution) runs asynchronously — failed messages are silently dropped and the block proceeds without them.

#### What is a conflict key?

The tuple identifying which state slot a message targets. For example, `(project_id, field)` for `PROJECT_METADATA`. Messages with the same conflict key are resolved by LWW or CAS depending on the type.

***

### Identity

#### What is a Make ID?

A `uint64` account identifier assigned by the onchain registry. Every message references a `mid` identifying the acting account.

#### What are key scopes?
| Scope   | Permissions                                                               |
| ------- | ------------------------------------------------------------------------- |
| OWNER   | Full control: manage keys, remove projects, manage collaborators          |
| SIGNING | Create projects, push commits, update refs, add verifications             |
| AGENT   | Automated actions, restricted to specific projects via `allowed_projects` |

Scopes are hierarchical — higher scopes inherit all lower-scope permissions.

#### How does verification work?

You sign `makechain:verify:` with your external key and submit a `VERIFICATION_ADD` message. Supported address types:

* **ETH\_ADDRESS** — EIP-191 personal\_sign recovery
* **SOL\_ADDRESS** — Ed25519 verification (address is the public key)

See [identity](/protocol/identity) for details.

***

### Consensus

#### How fast is finality?

\~200ms block time, \~300ms finality (2-chain rule).

#### Can I run a validator?

Currently devnet with a single validator. Multi-validator support is implemented (`--bootstrapper` and multi-participant flags). Public validator participation will open on testnet.

#### What consensus algorithm does Makechain use?

[Simplex BFT](https://commonware.xyz) — single-chain, `3f + 1` fault tolerance, round-robin leader election, 2-chain finality.

***

### Storage

#### What are the storage limits?

Per storage unit:

| Resource                  | Limit                  |
| ------------------------- | ---------------------- |
| Projects                  | 10                     |
| Commits per project       | 10,000 (oldest pruned) |
| Refs per project          | 200                    |
| Collaborators per project | 50                     |
| Keys per account          | 100                    |
| Verifications per account | 50                     |

See [storage limits](/protocol/storage-limits) for the full allocation model.

#### What happens when I hit the commit limit?

Oldest commits not referenced by any active ref are pruned. Ref-targeted commits are never pruned. File content in the DA layer is unaffected.

***

### Development

#### What Rust version do I need?

Nightly 1.93+. Commonware dependencies require nightly features.
```bash
rustup install nightly
rustup default nightly
```

#### Why is protoc required?

`build.rs` uses `tonic-build` to compile **proto/makechain.proto** into Rust types and gRPC stubs.

* macOS: `brew install protobuf`
* Ubuntu: `apt install protobuf-compiler`

#### How do I run tests?

```bash
cargo test             # All 520 tests
cargo test test_fork   # Filter by name
```

See the [contributing](/contributing) page for the test distribution breakdown.

## Getting Started

Makechain is a Rust crate implementing the core protocol with a node binary and CLI client. This guide walks you through building, running a node, and submitting your first message.

### Prerequisites

* **Rust nightly 1.93+** (edition 2024)
* **protoc** (protobuf compiler) on your PATH

### Build

```bash
git clone https://github.com/officialunofficial/makechain.git
cd makechain
cargo build
```

### Run a Node

Start a single-validator development node:

```bash
cargo run --bin node -- --grpc-addr 127.0.0.1:50051 --p2p-addr 127.0.0.1:50052
```

The node starts Simplex BFT consensus and a gRPC server with grpc-web support.
#### Node Flags

| Flag                   | Default         | Description                                             |
| ---------------------- | --------------- | ------------------------------------------------------- |
| `--grpc-addr`          | `0.0.0.0:50051` | gRPC listen address                                     |
| `--p2p-addr`           | `0.0.0.0:50052` | P2P listen address                                      |
| `--seed`               | `0`             | Validator key seed (deterministic derivation)           |
| `--data-dir`           | `.makechain`    | Data directory for snapshots                            |
| `--snapshot-interval`  | `100`           | Save state snapshot every N blocks (0 = disabled)       |
| `--metrics-addr`       | `0.0.0.0:9090`  | Prometheus metrics endpoint                             |
| `--network`            | `devnet`        | Network: devnet, testnet, or mainnet                    |
| `--validators`         |                 | Additional validator public keys (hex, comma-separated) |
| `--bootstrappers`      |                 | Bootstrap peers: `pubkey@host:port` (comma-separated)   |
| `--rate-limit-burst`   | `100`           | Max burst tokens per account (0 = disabled)             |
| `--rate-limit-rate`    | `10.0`          | Tokens replenished per second per account               |
| `--mempool-capacity`   | `100000`        | Maximum pending messages in mempool                     |
| `--max-block-messages` | `10000`         | Maximum messages per block                              |

### Use the CLI

#### Generate a keypair

```bash
cargo run --bin cli -- keygen
# Output:
# secret: <hex>
# public: <hex>
```

#### Register your key on-chain

```bash
cargo run --bin cli -- register-key --secret <secret> --mid 1
# Output: accepted: hash=<hash>
```

#### Create a project

```bash
cargo run --bin cli -- create-project --secret <secret> --mid 1 --name "my-project"
# Output: accepted: project_id=<project_id>
```

#### Manage projects

```bash
# Set project metadata
cargo run --bin cli -- set-project-metadata --secret <secret> --mid 1 \
  --project-id <project_id> --description "updated desc"

# Add a collaborator
cargo run --bin cli -- add-collaborator --secret <secret> --mid 1 \
  --project-id <project_id> --collaborator-mid 2

# Fork a project
cargo run --bin cli -- fork-project --secret <secret> --mid 1 \
  --source-project-id <project_id> --source-commit-hash <hash> --name "my-fork"
```

#### Query state

```bash
# Get account info
cargo run --bin cli -- get-account --mid 1

# Get project info
cargo run --bin cli -- get-project --id <project_id>

# Look up project by name
cargo run --bin cli -- get-project-by-name --mid 1 --name "my-project"

# List all projects
cargo run --bin cli -- list-projects

# List refs, commits, collaborators
cargo run --bin cli -- list-refs --project-id <project_id>
cargo run --bin cli -- list-commits --project-id <project_id>
cargo run --bin cli -- list-collaborators --project-id <project_id>

# List account keys
cargo run --bin cli -- list-keys --mid 1

# Subscribe to live message stream
cargo run --bin cli -- subscribe
```

### Using as a Library

You can also use makechain as a Rust library for building and verifying messages:

```rust
use makechain::message::{build_message, verify_message};
use makechain::proto::{self, message_data::Body, MessageType, Network};
use ed25519_dalek::SigningKey;
use rand::rngs::OsRng;

// Generate a signing key
let signing_key = SigningKey::generate(&mut OsRng);

// Create a PROJECT_CREATE message
let data = proto::MessageData {
    r#type: MessageType::ProjectCreate as i32,
    mid: 1,
    timestamp: 1000,
    network: Network::Devnet as i32,
    body: Some(Body::ProjectCreate(proto::ProjectCreateBody {
        name: "my-project".to_string(),
        visibility: proto::Visibility::Public as i32,
        description: "A new project".to_string(),
        license: "MIT".to_string(),
    })),
};

// Sign and wrap the message
let message = build_message(data, &signing_key).unwrap();

// The message hash IS the project_id (content-addressed)
let project_id = &message.hash;

// Verify the message
assert!(verify_message(&message).is_ok());
```

### Run Tests

```bash
cargo test             # Run all 520 tests
cargo test <name>      # Run a specific test
```

### Next Steps

* [Protocol Overview](/protocol/overview) — understand message semantics
* [Message Types](/protocol/messages) — all available operations
* [Architecture](/architecture) — system design and execution model
* [API Reference](/api/overview) — gRPC endpoints

## Glossary

Definitions for terms used throughout Makechain documentation.
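Two of the resolution rules defined in this glossary, LWW and remove-wins, can be made concrete with a small sketch. The types below are illustrative only, not the node's actual implementation:

```rust
/// Last-write-wins register: the later message replaces the current value.
/// The real protocol orders by consensus position, which is a total order;
/// a plain timestamp stands in for it here.
#[derive(Debug)]
struct Lww<T> {
    value: T,
    timestamp: u64,
}

impl<T> Lww<T> {
    fn apply(&mut self, value: T, timestamp: u64) {
        if timestamp > self.timestamp {
            self.value = value;
            self.timestamp = timestamp;
        }
    }
}

/// 2P set membership for one element, given the latest add and remove
/// timestamps: present only if the add is strictly newer ("remove wins on tie").
fn present_2p(add_ts: Option<u64>, remove_ts: Option<u64>) -> bool {
    match (add_ts, remove_ts) {
        (Some(add), Some(remove)) => add > remove, // add == remove resolves to removed
        (Some(_), None) => true,
        _ => false,
    }
}

fn main() {
    // LWW: a later PROJECT_METADATA-style write replaces an earlier one.
    let mut description = Lww { value: "old", timestamp: 10 };
    description.apply("new", 20);
    assert_eq!(description.value, "new");

    // 2P: add and remove at the same tick resolve to removed.
    assert!(!present_2p(Some(5), Some(5)));
    assert!(present_2p(Some(6), Some(5)));
}
```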
### Protocol

* **Message** — signed operation envelope: BLAKE3 hash (32 bytes), Ed25519 signature (64 bytes), signer public key, and payload.
* **Message type** — the operation a message performs: `PROJECT_CREATE`, `COMMIT_BUNDLE`, `REF_UPDATE`, etc. Sixteen types grouped into [1P and 2P semantics](/protocol/state-model).
* **1P (one-phase)** — unilateral state change with no paired undo. Includes singletons, LWW registers, append-only, and state transitions.
* **2P (two-phase)** — add/remove pairs on a set. Remove wins on timestamp tie.
* **CAS (compare-and-swap)** — optimistic locking via expected current value. Used for [ref updates](/protocol/messages).
* **LWW (last-write-wins)** — most recent message by consensus order wins. Used for `PROJECT_METADATA` and `ACCOUNT_DATA`.
* **Remove-wins** — on tie between add and remove in a 2P set, remove takes precedence.
* **Conflict key** — the tuple identifying which state slot a message targets, for example `(project_id, ref_name)`.
* **Project ID** — BLAKE3 hash of the `PROJECT_CREATE` message. Content-addressed and immutable.
* **Envelope** — the outer `Message` struct wrapping `MessageData` with hash, signature, and signer fields.

### Identity

* **Make ID (MID)** — `uint64` account identifier from the onchain registry. Every message references a `mid`.
* **Scope** — key permission level: OWNER, SIGNING, or AGENT. Hierarchical.
* **Key** — Ed25519 public key registered to a Make ID with a scope. Added via `KEY_ADD`, removed via `KEY_REMOVE`.
* **Claim signature** — proof linking an external address (Ethereum or Solana) to a Make ID. Message: `makechain:verify:`.
* **Verification** — result of `VERIFICATION_ADD`. Links an `ETH_ADDRESS` or `SOL_ADDRESS` to a Make ID.
* **Owner address** — 20-byte EVM address on the account, set by the registry relay on first `KEY_ADD`. Updated via `OWNERSHIP_TRANSFER`.
* **Ownership transfer** — relay-injected message updating `owner_address`. Includes `previous_owner_address` for defense-in-depth.
* **Collaborator** — a Make ID granted write access to another account's project.

### Consensus

* **Simplex BFT** — single-chain BFT consensus from [Commonware](https://commonware.xyz). `3f + 1` fault tolerance.
* **Block** — batch of messages ordered by consensus. \~200ms block time.
* **Finality** — 2-chain rule: final after two consecutive notarized blocks. \~300ms.
* **Notarization** — 2/3+ validator vote to accept a proposed block.
* **Mempool** — validated message queue. 100,000 capacity, deduplicates by hash.
* **Leader** — validator proposing a block in a given round. Round-robin.

### Execution

* **Account pre-pass** — Phase 1: serial execution of account-level messages (`KEY_ADD`, `KEY_REMOVE`, `OWNERSHIP_TRANSFER`, `ACCOUNT_DATA`, `VERIFICATION_ADD`/`REMOVE`, `PROJECT_CREATE`, `PROJECT_REMOVE`, `FORK`).
* **Project execution** — Phase 2: project-scoped messages grouped by `project_id` and executed in [parallel](/protocol/sharding), each with its own overlay store.
* **Overlay store** — copy-on-write store providing isolation between parallel project groups.
* **Snapshot store** — read-only base state plus account pre-pass diffs. Base for overlay stores in Phase 2.
* **State root** — BLAKE3 merkle root of sorted per-project roots.
* **State diff** — writes and deletes produced by block execution. Applied atomically on commit.

### Storage

* **Storage unit** — yearly capacity allocation. Default: 1 unit. Provides 10 projects, 10,000 commits/project, 200 refs, 50 collaborators.
* **DA layer (data availability layer)** — separate storage for file content (blobs, trees). Referenced by `da_reference` in commit bundles.
* **Pruning** — removal of oldest unprotected commit metadata when a project exceeds its limit. Ref-targeted commits are never pruned.
* **Ref** — named pointer (branch or tag) to a commit hash. CAS-ordered updates, monotonic nonces.
* **Fast-forward** — ref update where the new commit descends from the current target.
* **Nonce** — monotonic counter on each ref preventing reordering.

### Infrastructure

* **Commonware** — distributed systems primitives: consensus, p2p, parallel execution, storage, cryptography, codec.
* **tonic** — Rust gRPC framework. Supports grpc-web.
* **rayon** — data-parallelism library for Phase 2 project execution.
* **QMDB** — merkleized persistent state backend from Commonware. Hybrid architecture: in-memory `BTreeMap` as runtime store with QMDB as write-behind durable layer.
* **grpc-web** — HTTP/1.1-compatible gRPC for browser clients.
* **protobuf** — serialization format. Types defined in **proto/makechain.proto**, compiled via `tonic-build`.
* **Network** — chain namespace: `devnet`, `testnet`, or `mainnet`. Cross-network messages are rejected.

## Troubleshooting

Common errors and their solutions.

### Build errors

#### protoc not found

```
error: failed to run custom build command for `makechain`
Could not find `protoc` installation
```

Install `protoc`:

```bash
# macOS
brew install protobuf

# Ubuntu / Debian
sudo apt install protobuf-compiler

# Verify
protoc --version
```

#### Rust nightly required

```
error[E0554]: `#![feature]` may not be used on the stable release channel
```

Nightly 1.93+ required:

```bash
rustup install nightly
rustup default nightly
```

#### Commonware version mismatch

```
error: failed to select a version for `commonware-consensus`
```

Ensure **Cargo.lock** is up to date:

```bash
cargo update
cargo build
```

***

### Node startup

#### Port already in use

```
Error: Address already in use (os error 48)
```

The default ports are 50051 (gRPC) and 50052 (p2p). Check for conflicting processes:

```bash
lsof -i :50051
lsof -i :50052
```

Kill the conflicting process or configure different ports via CLI flags.
#### Invalid network flag

```
Error: invalid network "prod" — expected devnet, testnet, or mainnet
```

Valid values: `devnet`, `testnet`, `mainnet`.

#### Validator key file errors

```
Error: failed to read validator key file
```

The file must contain a 64-character hex-encoded Ed25519 seed. For development, use `--seed` instead:

```bash
cargo run --bin node -- --seed 1
```

***

### Message submission errors

These errors are returned synchronously by `SubmitMessage` (pipeline stages 1–5). See the [submit pipeline](/protocol/submit-pipeline) for the full flow.

#### Envelope verification (stage 1)

| Error             | Cause                       | Solution                                            |
| ----------------- | --------------------------- | --------------------------------------------------- |
| Hash mismatch     | `BLAKE3(data) != hash`      | Recompute hash from serialized `MessageData`        |
| Invalid signature | Ed25519 verification failed | Ensure you sign the hash bytes with the correct key |

#### Structural validation (stage 2)

| Error                                   | Cause                            | Solution                                            |
| --------------------------------------- | -------------------------------- | --------------------------------------------------- |
| `missing message data`                  | `MessageData` is null            | Populate the `data` field                           |
| `mid must be non-zero`                  | `mid` field is 0                 | Set `mid` to your Make ID                           |
| `timestamp must be non-zero`            | `timestamp` is 0                 | Set timestamp to current epoch seconds              |
| `missing message body`                  | No `oneof body` variant set      | Set the appropriate body for your message type      |
| `project name is empty`                 | Empty `name` in `PROJECT_CREATE` | Provide a non-empty project name                    |
| `project name too long`                 | Name exceeds 100 characters      | Shorten the project name                            |
| `project_id must be 32 bytes`           | Wrong-length project ID          | Use the 32-byte BLAKE3 hash of the creation message |
| `ref_update: missing ref_name`          | Empty ref name                   | Provide a ref name (for example, `refs/heads/main`) |
| `ref_update: new_hash must be 32 bytes` | Wrong-length commit hash         | Use a 32-byte BLAKE3 hash                           |
| `commit_bundle: empty commits list`     | No commits in bundle             | Include at least one commit                         |
| `commit_bundle: too many commits`       | More than 1,000 commits          | Split into multiple bundles                         |

#### Signer authorization (stage 3)

| Error                 | Cause                                                 | Solution                         |
| --------------------- | ----------------------------------------------------- | -------------------------------- |
| Signer not registered | The signing key is not registered for the given `mid` | Submit a `KEY_ADD` message first |

#### Network validation (stage 4)

| Error            | Cause                                         | Solution                                                 |
| ---------------- | --------------------------------------------- | -------------------------------------------------------- |
| Network mismatch | Message `network` differs from node's network | Set the message network to match (for example, `devnet`) |

#### Mempool admission (stage 5)

| Error                       | Cause                              | Solution                                              |
| --------------------------- | ---------------------------------- | ----------------------------------------------------- |
| Duplicate message           | Message hash already in mempool    | This message was already submitted — no action needed |
| Mempool full                | Mempool at capacity (100,000)      | Wait and retry                                        |
| Timestamp too old           | Message older than 10 minutes      | Use a current timestamp                               |
| Timestamp too far in future | Message more than 30 seconds ahead | Fix client clock skew                                 |

***

### State errors (execution stage)

These errors occur during block execution (stage 6) and cause the message to be silently dropped. They appear in node logs but are not returned to the submitter.
| Error                       | Cause                                                    | Solution                                                                    |
| --------------------------- | -------------------------------------------------------- | --------------------------------------------------------------------------- |
| `UnknownAccount`            | `mid` has no registered account                          | Register a key first via `KEY_ADD`                                          |
| `UnknownKey`                | Signing key not found for `mid`                          | Register the key with `KEY_ADD`                                             |
| `InsufficientScope`         | Key scope too low for the operation                      | Use a key with the required scope (for example, OWNER for `PROJECT_REMOVE`) |
| `NotProjectOwner`           | Operation requires project ownership                     | Only the project creator can perform this action                            |
| `NotProjectAdmin`           | Operation requires admin access                          | The `mid` needs owner or collaborator access with admin role                |
| `NotProjectWriter`          | Operation requires write access                          | Add the `mid` as a collaborator or use the owner's key                      |
| `ForkAccessDenied`          | Forking a private project without read access            | Request collaborator access from the project owner                          |
| `AgentProjectDenied`        | AGENT key not authorized for this project                | Add the project to the key's `allowed_projects` list                        |
| `ProjectNotFound`           | `project_id` does not exist                              | Verify the project ID (BLAKE3 hash of the creation message)                 |
| `ProjectRemoved`            | Project has been removed                                 | The project was deleted via `PROJECT_REMOVE`                                |
| `ProjectArchived`           | Project is read-only                                     | Archived projects do not accept writes. Create a new project or fork        |
| `StorageLimitExceeded`      | Account hit the project limit                            | Each storage unit allows 10 projects                                        |
| `RefLimitExceeded`          | Project hit the 200-ref limit                            | Delete unused refs before creating new ones                                 |
| `CollaboratorLimitExceeded` | Project hit the 50-collaborator limit                    | Remove inactive collaborators                                               |
| `KeyLimitExceeded`          | Account hit the 50-key limit                             | Remove unused keys                                                          |
| `VerificationLimitExceeded` | Account hit the 50-verification limit                    | Remove old verifications                                                    |
| `ProjectNameTaken`          | Another project by this account uses the same name       | Choose a different name                                                     |
| `InvalidClaimSignature`     | Verification signature is invalid                        | Re-sign the claim message `makechain:verify:<mid>` with your external key   |
| `RefCasMismatch`            | Ref has been updated since you last read it              | Re-read the ref, update `old_hash`, and retry                               |
| `RefNotFound`               | Ref does not exist in the project                        | Check the ref name spelling                                                 |
| `CommitNotFound`            | Referenced commit hash not in the project                | Push the commit bundle before updating the ref                              |
| `NotFastForward`            | New commit is not a descendant of the current ref target | Rebase or merge before pushing                                              |
| `RefNonceMismatch`          | Ref nonce does not match expected value                  | Re-read the ref to get the current nonce                                    |
| `AlreadyExists`             | Adding an entry that already exists in a 2P set          | The resource (collaborator, key, verification) is already active            |
| `AlreadyRemoved`            | Removing an entry that is already removed                | No action needed — the resource is already gone                             |

***

### API errors

gRPC status codes returned by query endpoints:

| gRPC Status        | `ApiError` Variant | Common triggers                                          |
| ------------------ | ------------------ | -------------------------------------------------------- |
| `NOT_FOUND`        | `NotFound`         | Invalid project ID, unknown account, missing ref         |
| `INVALID_ARGUMENT` | `InvalidArgument`  | Malformed hex string, invalid cursor, limit out of range |
| `INTERNAL`         | `Internal`         | State lock failure, unexpected storage error             |
| `INTERNAL`         | `State(...)`       | State errors propagated from the execution layer         |

## Consensus

### Engine

Makechain uses **Simplex BFT** via the [commonware-consensus](https://commonware.xyz) primitive. A single consensus chain orders all messages with parallel per-project execution within each block.

| Property        | Value                                            |
| --------------- | ------------------------------------------------ |
| Block time      | \~200ms target                                   |
| Finality        | \~300ms (2-chain rule)                           |
| Fault tolerance | Byzantine fault tolerant up to 1/3 of validators |

The validator set is initially permissioned, with a path to permissionless staking.

### Block Lifecycle

1. **Propose** — The round leader drains the mempool, executes messages in two phases (account pre-pass + parallel project execution), and produces a state root digest
2. **Verify** — Other validators re-execute the messages and verify the state root matches
3. **Notarize** — Validators vote to notarize the block (2/3 threshold)
4. **Finalize** — When two consecutive blocks are notarized, the first is finalized (2-chain rule)
5.
**Commit** — State diffs are applied to the base store and committed messages are broadcast to subscribers ### Networking * **Transport:** `commonware-p2p::authenticated` — encrypted connections between peers identified by Ed25519 public keys * **Channels:** Three Simplex network channels — votes, certificates, and resolver (catch-up) * **Mempool:** Messages submitted to any validator are propagated to the leader's mempool * **Sync:** New nodes download periodic snapshots, then sync blocks from the snapshot height ### Configuration Key consensus parameters (configurable via `ConsensusConfig`): | Parameter | Default | Description | | -------------------------- | ------- | ------------------------------------------- | | `leader_timeout` | 200ms | Time to wait for a leader proposal | | `notarization_timeout` | 500ms | Time to wait for notarization | | `max_block_messages` | 10,000 | Maximum messages per block | | `max_project_messages` | 500 | Maximum messages per project per block | | `mempool_capacity` | 100,000 | Maximum pending messages | | `max_timestamp_age_secs` | 600 | Reject messages older than 10 minutes | | `max_timestamp_drift_secs` | 30 | Reject messages more than 30s in the future | ## Data Availability The consensus layer stores only message metadata (\~100–500 bytes per message). Actual file content — blobs, trees, and full commit messages — lives in a separate data availability (DA) layer. ### Architecture ``` Developer Makechain Consensus DA Layer │ │ │ ├─ Upload blobs ───────────────────────────────────► │ │ │ │ ├─ COMMIT_BUNDLE ──────────►│ │ │ (da_reference = hash) │ │ │ ├─ DA sampling ─────────►│ │ │ (confirm availability)│ │ │◄──────────────────────┘│ │ │ │ │ ├─ Include in block │ │ │ │ Consumer │ │ │ │ │ ├─ Read commit metadata ◄──┤ │ ├─ Fetch blobs ───────────────────────────────────► │ │ │ │ ``` ### DA Reference Each `COMMIT_BUNDLE` includes a `da_reference` — a 32-byte hash identifying the erasure-coded blob data in the DA layer. 
This hash is opaque to the consensus layer; its interpretation depends on the DA backend. The DA reference links consensus-layer metadata to the full data: | Consensus Layer (validators) | DA Layer (storage) | | ----------------------------------- | -------------------------------- | | Commit hash, title, author, parents | Full commit message text | | Tree root hash | Tree objects (directory listing) | | DA reference | Blob objects (file content) | ### Blob Lifecycle 1. **Upload**: Developer uploads tree and blob data to the DA layer, receiving a `da_reference` hash 2. **Submit**: Developer submits a `COMMIT_BUNDLE` with the `da_reference` and commit metadata 3. **Validate**: Validators confirm the data is available via DA sampling before including the bundle in a block 4. **Store**: Consensus stores only the metadata; the DA layer retains the full data 5. **Retrieve**: Consumers read metadata from consensus and fetch full data from the DA layer ### Recovery from Pruning When consensus-layer commit metadata is pruned (see [Storage Limits](/protocol/storage-limits)), the full commit data remains recoverable from the DA layer: * Pruned `CommitMeta` entries lose their hash, title, and parent links from validator state * The DA layer retains the complete blob data indefinitely (subject to DA-layer retention policies) * A node syncing from scratch can reconstruct pruned commit history by walking the DA layer This separation ensures that storage limits on validators don't cause permanent data loss. ### DA Sampling Validators use DA sampling to confirm that blob data is actually available before including a `COMMIT_BUNDLE` in a block. This prevents a scenario where a developer submits metadata referencing data that doesn't exist. The sampling mechanism integrates with the `CertifiableAutomaton` trait from commonware-consensus: ``` certify(digest) → bool ``` The default implementation returns `true` (no sampling). 
When DA sampling is enabled, `certify()` will check that all `da_reference` values in the proposed block are available in the DA layer before voting to finalize. ### DA Backend Options The DA backend is pluggable. Options under consideration: | Backend | Tradeoffs | | ----------------------------------- | ------------------------------------------------------------- | | **Commonware `coding`** | Erasure coding with availability sampling, native integration | | **Validator-operated storage** | Content-addressed blob store run by the validator set | | **External DA (Celestia, EigenDA)** | External trust assumption, may add latency | | **IPFS** | Decentralized, but no availability guarantees | The initial implementation uses a simple content-addressed blob store. The `da_reference` is the BLAKE3 hash of the uploaded data. ## Identity ### Ownership Hierarchy Every Make ID is owned by an EVM wallet address on Tempo: | Layer | Role | | ------------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------- | | **EVM Wallet / Passkey** (`owner_address`) | Canonical account owner. Pays gas for registration, funds storage, manages keys. Supports EOAs, smart wallets, and WebAuthn passkeys. | | **Make ID** (`mid`) | On-chain account identifier (uint64). Assigned by the registry contract. Ownership is transferable. | | **Ed25519 Keys** | Delegated signing keys for fast off-chain operations (signing messages, pushing commits). | Registration costs gas on Tempo, providing natural spam resistance — no one can create unlimited accounts for free. MID ownership is transferable onchain for social recovery and account migration. A custodial recovery address can be authorized for safe transfer in case of key loss. ### Accounts An **account** is identified by a unique Make ID (`mid`, uint64) assigned by an onchain registry contract. 
Each account has an `owner_address` (20-byte EVM address) set on its first `KEY_ADD` message. Ownership transfer happens onchain via the registry contract's `transferOwnership()` function, which emits an event relayed into Makechain consensus as an `OWNERSHIP_TRANSFER` message. This updates `owner_address` in the state layer. The `OWNERSHIP_TRANSFER` message includes both the new and previous owner addresses — the previous address must match the current state as a defense-in-depth check. ### Keys All keys are **Ed25519**. Each account has one or more registered keys with explicit scopes: | Scope | Capabilities | | --------- | ----------------------------------------------------------------------- | | `OWNER` | Full account control: manage keys, transfer projects, delete account | | `SIGNING` | Push commits, update refs, manage collaborators on authorized projects | | `AGENT` | Automated actions (CI/CD, AI agents) — scoped to specific projects/refs | Keys are registered onchain and relayed into the consensus layer as `KEY_ADD` / `KEY_REMOVE` messages (2P set) so validators can verify signatures without querying the chain. The `KEY_ADD` message carries an `owner_address` field set by the registry relay on registration. ### Registry Contracts Four contracts on Tempo manage MID lifecycle. Validators watch the **MakeRegistry** for events and relay them into Makechain consensus. ``` MakeBundler → MakeIdGateway → MakeRegistry ← RecoveryRouter (atomic ops) (registration (core: MIDs, (social recovery policy) keys, transfer, via multisig) recovery) ``` #### MakeRegistry The core contract. Manages MID ownership, Ed25519 key delegation, ownership transfer, and time-locked recovery. 
| Function | Relay Message | Description | | ---------------------------------- | --------------------------- | ----------------------------------------------- | | `register(to, key, scope)` | `KEY_ADD` + `owner_address` | Create a new MID (gateway-only) | | `addKey(mid, key, scope, ...)` | `KEY_ADD` | Add a delegated signing key | | `removeKey(mid, key)` | `KEY_REMOVE` | Revoke a key (permanent — no re-add) | | `transferOwnership(mid, newOwner)` | `OWNERSHIP_TRANSFER` | Transfer MID ownership | | `setRecovery(mid, recovery)` | — | Set time-locked recovery address | | `initiateRecovery(mid, newOwner)` | — | Start recovery timelock (recovery address only) | | `cancelRecovery(mid)` | — | Cancel during timelock (owner only) | | `executeRecovery(mid)` | `OWNERSHIP_TRANSFER` | Complete recovery after timelock (anyone) | Key state machine: `NULL → ADDED → REMOVED` (irreversible — prevents key recycling attacks). Recovery address is preserved through direct transfers so recovery can still function if an attacker transfers the MID. #### MakeIdGateway Controls registration policy without changing the registry address validators watch. | Mode | Behavior | | ------------- | ---------------------------- | | `OPEN` | Anyone can register | | `INVITE_ONLY` | Requires a valid invite code | | `CLOSED` | Registration disabled | #### MakeBundler Atomic multi-step operations in a single transaction: register + add multiple keys + set recovery address. While Tempo's native batch calls (type `0x76`) can achieve this, the Bundler provides a typed API. #### RecoveryRouter A social recovery proxy owned by a multisig (e.g., Gnosis Safe 2/3). Users set the router's address as their recovery address. The multisig can then initiate time-locked recovery on their behalf if they lose access. 
Different user types can choose different recovery strategies:

| User Type              | Recovery Address         | Controller                  |
| ---------------------- | ------------------------ | --------------------------- |
| Self-custody           | Personal hardware wallet | User directly               |
| Hosted (Makechain app) | RecoveryRouter           | 2/3 ops multisig            |
| Enterprise             | Own RecoveryRouter       | Organization's 3/5 multisig |

#### Passkeys and Tempo Transactions

Tempo Transactions (EIP-2718 type `0x76`) enable native support for:

* **Passkey authentication (secp256r1/WebAuthn)** — Tempo validates P256 signatures at the chain level before EVM execution, so `msg.sender` is already authenticated when it reaches the registry contracts. No onchain P256 verification is needed.
* **Batch calls** — deploy smart account + register MID + add keys in one atomic transaction
* **Fee sponsorship** — Makechain can cover registration gas for new users

This means passkey users follow the same code path as EOA users — Tempo handles the cryptographic difference transparently. The EIP-712 meta-transaction path (`registerFor`) uses secp256k1 ECDSA for offline/relayer signing scenarios.

The contracts are in [`contracts/src/`](https://github.com/officialunofficial/makechain/tree/main/contracts/src).

### Signature Scheme

* **Ed25519** — fast verification (\~60k verifications/sec on commodity hardware), compact signatures (64 bytes), deterministic signing (no nonce reuse risk)
* **BLAKE3** — 32-byte digests for message hashing, commit hashing, and merkle tree construction

### External Address Verification

Accounts can prove ownership of external blockchain addresses via `VERIFICATION_ADD` / `VERIFICATION_REMOVE` messages (2P set). Each verification requires a `claim_signature` proving the external key signed a deterministic challenge message.

#### Challenge Message

The message to sign is:

```
makechain:verify:<mid>
```

Where `<mid>` is the decimal string representation of the account's Make ID.
For example, account `42` signs the UTF-8 bytes of `makechain:verify:42`.

#### Ethereum (ETH\_ADDRESS)

Sign the challenge using [EIP-191 personal\_sign](https://eips.ethereum.org/EIPS/eip-191):

```
keccak256("\x19Ethereum Signed Message:\n" + len(message) + message)
```

The `claim_signature` is 65 bytes: `r (32) || s (32) || v (1)` where `v` is the recovery ID (0 or 1). The `address` field is the 20-byte Ethereum address. The protocol recovers the public key from the signature, derives the address via `keccak256(pubkey)[12..]`, and verifies it matches.

#### Solana (SOL\_ADDRESS)

Sign the challenge using standard Ed25519:

```
ed25519_sign(keypair, "makechain:verify:<mid>")
```

The `claim_signature` is 64 bytes (standard Ed25519 signature). The `address` field is the 32-byte Solana public key. The protocol verifies the signature directly against the address.

## Message Types

All message types and their semantics.

### 2P: Project Set

| Type             | Description                                           | Required Scope |
| ---------------- | ----------------------------------------------------- | -------------- |
| `PROJECT_CREATE` | Create a new project with name and visibility         | SIGNING        |
| `PROJECT_REMOVE` | Remove a project (hides refs, commits, collaborators) | OWNER          |

Conflict key: `(project_id)`. A removed project retains its data — a subsequent `PROJECT_CREATE` referencing the same project ID restores it.

### 1P: Singleton

| Type   | Description                                   | Required Scope |
| ------ | --------------------------------------------- | -------------- |
| `FORK` | Fork an existing project at a specific commit | SIGNING        |

Includes `source_commit_hash` anchoring the fork to a precise point. The forked project's ID is the BLAKE3 hash of the FORK message.
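The content-addressing rule above (a project's ID is the BLAKE3 hash of the message that created it, whether `PROJECT_CREATE` or `FORK`) can be sketched in a few lines. This is an illustrative Python fragment, not the node's Rust implementation: the serialization format is invented for the example, and `hashlib.blake2b` with a 32-byte digest stands in for BLAKE3, which is not in the Python standard library.

```python
import hashlib

def content_id(message_bytes: bytes) -> bytes:
    """Stand-in for the protocol's 32-byte BLAKE3 message hash."""
    return hashlib.blake2b(message_bytes, digest_size=32).digest()

# Two PROJECT_CREATE messages with the same name serialize differently
# (different mid, different timestamp), so they yield different IDs.
a = content_id(b"PROJECT_CREATE|mid=42|ts=1700000000|name=my-app")
b = content_id(b"PROJECT_CREATE|mid=43|ts=1700000005|name=my-app")

assert len(a) == 32 and len(b) == 32
assert a != b
```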
### 1P: LWW Register | Type | Conflict Key | Required Scope | | ------------------ | --------------------- | -------------- | | `PROJECT_METADATA` | `(project_id, field)` | SIGNING | | `ACCOUNT_DATA` | `(mid, field)` | SIGNING | ### 1P: Append-only | Type | Description | Required Scope | | --------------- | ----------------------------------------------------- | -------------- | | `COMMIT_BUNDLE` | Declare a batch of new commit metadata + DA reference | AGENT | Commits are ordered parent-first within a bundle. Each commit includes: hash, parent hashes, tree root hash, author MID, title, and message hash. ### 1P: State Transition | Type | Description | Required Scope | | ----------------- | ---------------------- | -------------- | | `PROJECT_ARCHIVE` | Make project read-only | OWNER | ### 2P: Ref Set (CAS-ordered) | Type | Description | Required Scope | | ------------ | ------------------------------- | -------------- | | `REF_UPDATE` | Move a ref to a new commit hash | AGENT | | `REF_DELETE` | Remove a ref | AGENT | `REF_UPDATE` uses compare-and-swap: includes expected current hash (`old_hash`). If the ref has moved, the update is rejected. Updates must be fast-forward (the new commit must be a descendant of the current ref target) unless `force = true`. ### 2P: Collaborator Set | Type | Description | Required Scope | | --------------------- | ------------------------------------ | --------------- | | `COLLABORATOR_ADD` | Grant an account access to a project | SIGNING (admin) | | `COLLABORATOR_REMOVE` | Revoke access | SIGNING (admin) | Permissions: `READ`, `WRITE`, `ADMIN`, `OWNER`. 
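The `REF_UPDATE` compare-and-swap and fast-forward rules from the ref set above can be sketched as follows. This is an illustrative Python fragment (the node itself is Rust); the commit-graph representation and helper names are invented, while the error names mirror the protocol's state errors.

```python
class RefCasMismatch(Exception): pass
class NotFastForward(Exception): pass

def is_descendant(parents: dict, new_hash: str, old_hash: str) -> bool:
    """True if old_hash is reachable from new_hash via parent links."""
    stack, seen = [new_hash], set()
    while stack:
        h = stack.pop()
        if h == old_hash:
            return True
        if h not in seen:
            seen.add(h)
            stack.extend(parents.get(h, ()))
    return False

def ref_update(refs, parents, name, old_hash, new_hash, force=False):
    # CAS: the caller's expected current hash must match state.
    if refs.get(name) != old_hash:
        raise RefCasMismatch(name)
    # Fast-forward: the new commit must descend from the current target.
    if old_hash is not None and not force and \
            not is_descendant(parents, new_hash, old_hash):
        raise NotFastForward(name)
    refs[name] = new_hash

parents = {"c1": (), "c2": ("c1",), "x1": ()}   # x1 is unrelated history
refs = {"refs/heads/main": "c1"}

ref_update(refs, parents, "refs/heads/main", "c1", "c2")  # fast-forward: ok
assert refs["refs/heads/main"] == "c2"

try:
    ref_update(refs, parents, "refs/heads/main", "c2", "x1")
except NotFastForward:
    pass  # x1 does not descend from c2: rejected unless force=True
```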
### 1P: Relay-Injected

| Type                 | Description                                    | Authorization  |
| -------------------- | ---------------------------------------------- | -------------- |
| `KEY_ADD`            | Register an Ed25519 key with a scope           | Relay-injected |
| `KEY_REMOVE`         | Revoke a key                                   | Relay-injected |
| `OWNERSHIP_TRANSFER` | Transfer MID ownership to a new wallet address | Relay-injected |

These messages are injected by validators relaying events from the onchain [MakeRegistry](/protocol/identity#registry-contracts) contract. No Ed25519 scope check is performed — the onchain transaction was already validated. `OWNERSHIP_TRANSFER` includes `previous_owner_address` for defense-in-depth (must match current state).

### 2P: Verification Set

| Type                  | Description                            | Required Scope |
| --------------------- | -------------------------------------- | -------------- |
| `VERIFICATION_ADD`    | Prove ownership of an external address | SIGNING        |
| `VERIFICATION_REMOVE` | Revoke a verification                  | SIGNING        |

Supported types: `ETH_ADDRESS` (Ethereum EOA), `SOL_ADDRESS` (Solana). The `claim_signature` must be a valid signature over the challenge message `makechain:verify:<mid>`. See [Identity](/protocol/identity#external-address-verification) for signing details.

## Protocol Overview

Makechain is a realtime decentralized protocol for ordering and storing git-like messages — project creation, commits, ref updates, access control — with permissionless publishing and cryptographic attribution.

### Design Goals

1. **High throughput** — 10,000+ messages per second with sub-second finality
2. **Permissionless publishing** — anyone can create projects and push code
3. **Self-authenticating messages** — every message verifiable without external lookups
4.
**Thin consensus** — consensus orders metadata and ref pointers; file blobs live in a separate DA layer ### Message Envelope Every message on the network is wrapped in a self-authenticating envelope: ``` Message { data: MessageData // The operation hash: bytes(32) // BLAKE3(data) signature: bytes(64) // Ed25519 signature over hash signer: bytes(32) // Ed25519 public key } ``` Verification: check that `signer` is a registered key for `data.mid` with sufficient scope for the message type. ### Message Semantics Every message type follows one of two paradigms: #### 1P (One-Phase) The message creates or updates state unilaterally. No paired "undo" message exists. | Sub-type | Behavior | Examples | | -------------------- | ------------------------------------ | ---------------------------------- | | **Singleton** | Creates a new resource, irreversible | `FORK` | | **LWW Register** | Last-write-wins per conflict key | `PROJECT_METADATA`, `ACCOUNT_DATA` | | **Append-only** | Adds entries to a growing set | `COMMIT_BUNDLE` | | **State transition** | Moves resource to terminal state | `PROJECT_ARCHIVE` | #### 2P (Two-Phase) Add and Remove pairs operating on a set. On a timestamp tie, **remove wins**. | Sub-type | Behavior | Examples | | --------------- | ------------------------------------ | --------------------------------------------- | | **Set** | Standard add/remove with remove-wins | Project, Collaborator, Key, Verification sets | | **CAS-ordered** | Compare-and-swap for sequencing | `REF_UPDATE` / `REF_DELETE` | ### Content-Addressed IDs Project IDs are content-addressed — the `project_id` is the BLAKE3 hash of the `PROJECT_CREATE` message itself (i.e., `Message.hash`). Forked project IDs are the hash of the `FORK` message. This means two projects with the same name get different IDs because the hash includes MID, timestamp, etc. ## Security Overview of cryptographic primitives, authorization, consensus guarantees, replay protection, rate limiting, and P2P security. 
### Cryptographic primitives

| Primitive         | Algorithm                       | Size     | Usage                                  |
| ----------------- | ------------------------------- | -------- | -------------------------------------- |
| Message hash      | BLAKE3                          | 32 bytes | `BLAKE3(MessageData)`                  |
| Message signature | Ed25519                         | 64 bytes | `Ed25519.sign(hash, key)`              |
| Signer key        | Ed25519                         | 32 bytes | Public key identifying the signer      |
| State root        | BLAKE3                          | 32 bytes | Merkle root of all state after a block |
| ETH claim         | EIP-191 + secp256k1 + keccak256 | 65 bytes | Ethereum address linking               |
| SOL claim         | Ed25519                         | 64 bytes | Solana address linking                 |

Claim message format: `makechain:verify:<mid>`.

***

### Authorization model

#### Key scopes

| Scope   | Level   | Permissions                                                                 |
| ------- | ------- | --------------------------------------------------------------------------- |
| OWNER   | Highest | Manage keys, remove projects, manage collaborators, all lower-scope actions |
| SIGNING | Middle  | Create projects, push commits, update refs, add verifications, fork         |
| AGENT   | Lowest  | Restricted to specific projects via `allowed_projects`                      |

#### Check order

1. **Key lookup** — signing key registered for the message's `mid`
2. **Scope check** — key scope meets minimum for the message type
3. **Project access** — `mid` is owner or collaborator
4. **Agent restriction** — target project in key's `allowed_projects`

`KEY_ADD`, `KEY_REMOVE`, and `OWNERSHIP_TRANSFER` skip the signer pre-check (relayed from the onchain registry).

#### Collaborators

Collaborators are granted access at `READ`, `WRITE`, `ADMIN`, or `OWNER` level. Only the project owner or an admin collaborator can add or remove them. The owner cannot be a collaborator on their own project.

***

### Consensus security

Simplex BFT properties:

* **Fault tolerance** — `3f + 1` validators, tolerates `f` Byzantine
* **Finality** — 2-chain rule, \~300ms
* **Leader election** — round-robin

Committed blocks include a BLAKE3 hash verified before storage as a defense-in-depth check.
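The `3f + 1` fault-tolerance bound and the 2/3 notarization threshold work out as in this small arithmetic sketch. The exact rounding formula used by the implementation is an assumption here; the sketch only illustrates the relationship.

```python
def max_byzantine(n: int) -> int:
    """With n = 3f + 1 validators, up to f can be Byzantine."""
    return (n - 1) // 3

def notarization_quorum(n: int) -> int:
    """Smallest vote count strictly greater than 2/3 of n (assumed rounding)."""
    return (2 * n) // 3 + 1

# A 4-validator devnet tolerates 1 fault and notarizes with 3 votes.
assert max_byzantine(4) == 1 and notarization_quorum(4) == 3
# A 10-validator set tolerates 3 faults and notarizes with 7 votes.
assert max_byzantine(10) == 3 and notarization_quorum(10) == 7
```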
***

### Replay protection

#### Timestamp windows

| Parameter                  | Default      | Effect                 |
| -------------------------- | ------------ | ---------------------- |
| `max_timestamp_age_secs`   | 600 (10 min) | Reject old messages    |
| `max_timestamp_drift_secs` | 30           | Reject future messages |

#### Hash deduplication

The mempool deduplicates by hash. A committed message index lets gossip receivers reject already-finalized messages.

#### Network isolation

Messages include a `network` field. The node rejects messages for a different network, preventing cross-network replay.

#### Ref nonces

A monotonically increasing nonce on each ref prevents reordering even when CAS hashes match.

***

### Rate limiting

Token-bucket on the gRPC API: burst 100, refill 10/sec. Exceeding the limit returns `RESOURCE_EXHAUSTED`.

***

### P2P security

* **Authenticated encryption** — all peer connections via `commonware-p2p`. Peers identified by Ed25519 keys.
* **Gossip validation** — inbound messages pass full envelope verification and are checked against the committed message index.
* **Channel quotas** — three Simplex channels (votes, certificates, resolver) with independent quotas.
* **Misbehavior blocking** — strike-based system. Peers sending invalid messages accumulate strikes and get blocked.

***

### Storage limits

| Resource                  | Limit                              |
| ------------------------- | ---------------------------------- |
| Projects per storage unit | 10                                 |
| Commits per project       | 10,000 (oldest unprotected pruned) |
| Refs per project          | 200                                |
| Collaborators per project | 50                                 |
| Keys per account          | 50                                 |
| Verifications per account | 50                                 |

See [storage limits](/protocol/storage-limits) for the full allocation model.

## Parallel Execution

Makechain uses a single consensus chain with parallel per-project execution within each block, rather than separate shard chains.
### Execution Model

Within each block, messages are processed in two phases:

#### Phase 1: Account Pre-pass (Serial)

Account-level messages are applied serially because they modify shared account state (key registrations, owner addresses, project counts):

* `KEY_ADD` / `KEY_REMOVE` / `OWNERSHIP_TRANSFER`
* `ACCOUNT_DATA`
* `VERIFICATION_ADD` / `VERIFICATION_REMOVE`
* `PROJECT_CREATE` / `PROJECT_REMOVE` / `FORK` (modify account `project_count`)

#### Phase 2: Project Execution (Parallel)

Project-scoped messages are grouped by `project_id` and each group is executed in parallel using rayon:

* `PROJECT_ARCHIVE` / `PROJECT_METADATA`
* `REF_UPDATE` / `REF_DELETE`
* `COMMIT_BUNDLE`
* `COLLABORATOR_ADD` / `COLLABORATOR_REMOVE`

Each project group operates on its own copy-on-write overlay store (`OverlayStore`) that can read the base state plus any account changes from Phase 1 (via `SnapshotStore`), ensuring isolation between projects.

### State Root Computation

After execution, state diffs from all projects are combined into a global state root:

1. Each project's diffs produce a per-project merkle root (BLAKE3 of sorted key-value pairs)
2. All project roots are sorted and combined into a global root

This is deterministic regardless of parallel execution order.

### Future: Sharding

The protocol spec reserves the possibility of sharding by `project_id` for horizontal scaling:

```rust
// First four bytes of the project ID, interpreted big-endian, mod shard count
let shard_index = u32::from_be_bytes(project_id[0..4].try_into().unwrap()) % num_shards;
```

The current parallel execution model is a stepping stone — the per-project isolation already provides the separation needed for future sharding without cross-shard coordination (except for `FORK`, which includes a state proof from the source project).
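The two-step root computation described above can be sketched in a few lines of Python. This is an illustration, not the Rust implementation: `hashlib.blake2b` with a 32-byte digest stands in for BLAKE3, and the key-value encoding is invented for the example. The point is that sorting at both levels makes the root independent of the order in which project groups finish.

```python
import hashlib

def h32(data: bytes) -> bytes:
    """Stand-in for a 32-byte BLAKE3 digest."""
    return hashlib.blake2b(data, digest_size=32).digest()

def project_root(diffs: dict) -> bytes:
    """Per-project root: hash of the key-value diffs in sorted key order."""
    return h32(b"".join(k + b"=" + v for k, v in sorted(diffs.items())))

def global_root(per_project: dict) -> bytes:
    """Global root: hash of all per-project roots in sorted order."""
    return h32(b"".join(sorted(project_root(d) for d in per_project.values())))

# Two executions that finish projects in different orders agree on the root.
run1 = {"proj_a": {b"ref/main": b"c1"}, "proj_b": {b"ref/main": b"c9"}}
run2 = {"proj_b": {b"ref/main": b"c9"}, "proj_a": {b"ref/main": b"c1"}}
assert global_root(run1) == global_root(run2)
assert len(global_root(run1)) == 32
```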
## State Model ### Projects A project's state consists of: * **Metadata** — name, description, visibility, license * **Refs** — map of ref names to commit hashes * **Known commits** — set of registered commit hashes + metadata * **Collaborators** — map of MIDs to permission levels * **Owner** — Make ID of the project owner ### Accounts An account's state consists of: * **Owner address** — 20-byte EVM address anchoring the account to an onchain wallet. Set on first `KEY_ADD`, updated via `OWNERSHIP_TRANSFER`. * **Registered keys** — set of public keys with scopes * **Account metadata** — username, avatar, bio, website * **Verified addresses** — set of external addresses with claim proofs * **Storage units** — capacity allocation ### Merkle State Project state is authenticated via per-project merkle roots: ``` Global State Root ├── Project A Root (BLAKE3 of sorted key-value diffs) ├── Project B Root ├── Project C Root └── ... ``` Each project has a `project_root` (BLAKE3 hash of its sorted key-value state diffs). The global state root combines all per-project roots in sorted order, producing a deterministic root regardless of parallel execution order. ### Storage Limits Per storage unit (yearly): | Resource | Limit | | --------------------------- | ------ | | Projects | 10 | | Commit metadata per project | 10,000 | | Refs per project | 200 | | Collaborators per project | 50 | | Keys per account | 50 | | Verifications per account | 50 | | DA storage | 1 GB | ### Pruning When a project exceeds its commit metadata limit, the oldest entries are pruned from consensus state: * Head commits for every branch/tag are always retained * Intermediate commits on active branches are retained up to the limit * Commits on deleted branches with no remaining ref are pruned first * Full commit history remains recoverable from the DA layer ## Storage Limits Makechain enforces per-account storage limits to prevent unbounded state growth. 
Each account has **storage units** (default: 1 for free tier), and limits scale with units. ### Per Storage Unit (Yearly) | Resource | Limit | | --------------------------- | ------ | | Projects | 10 | | Commit metadata per project | 10,000 | | Refs per project | 200 | | Collaborators per project | 50 | | Keys per account | 50 | | Verifications per account | 50 | | DA storage | 1 GB | ### Enforcement Limits are enforced at state transition time: * **Projects:** `PROJECT_CREATE` and `FORK` check `project_count < storage_units * 10` before incrementing. `PROJECT_REMOVE` decrements the count, freeing capacity for new projects. * **Refs:** `REF_UPDATE` checks `ref_count < 200` when creating a new ref. Updating an existing ref doesn't change the count. `REF_DELETE` decrements. * **Collaborators:** `COLLABORATOR_ADD` checks `collaborator_count < 50` when adding a new collaborator. Permission updates don't change the count. * **Commits:** `COMMIT_BUNDLE` always appends commits, then triggers auto-pruning if the count exceeds 10,000. * **Keys:** `KEY_ADD` checks `key_count < 50` when adding a new key. `KEY_REMOVE` decrements. * **Verifications:** `VERIFICATION_ADD` checks `verification_count < 50` when adding a new verification. `VERIFICATION_REMOVE` decrements. ### Commit Pruning When a project exceeds its commit metadata limit, the oldest unprotected commits are pruned from consensus state: **A commit referenced by any active ref is never pruned.** The ref's head commit and its entire parent chain (reachable via parent links) are protected. #### Pruning Algorithm 1. Build the **protected set**: BFS from all active ref heads through parent links 2. Enumerate all commits; collect those not in the protected set 3. Sort unprotected commits by `indexed_at` ascending (oldest first) 4. 
Delete oldest unprotected commits until at or below the limit

#### What This Means in Practice

* Head commits for every branch/tag are always retained
* Intermediate commits on active branches are retained
* Commits on deleted branches with no remaining ref are pruned first
* Full commit history remains recoverable from the DA layer — pruning only removes `CommitMeta` from validator state

### Error Types

| Error | Trigger |
| --- | --- |
| `StorageLimitExceeded` | Project count at capacity |
| `RefLimitExceeded` | Ref count at 200 |
| `CollaboratorLimitExceeded` | Collaborator count at 50 |
| `KeyLimitExceeded` | Key count at 50 |
| `VerificationLimitExceeded` | Verification count at 50 |

## Message Submit Pipeline

Every message goes through a multi-stage validation pipeline before being included in a block.

### Pipeline stages

Submit pipeline: verify, validate, authorize, mempool, propose, execute, commit with rejection arrows at each synchronous stage

```
Client → gRPC SubmitMessage
  │
  ├─ 1. Verify hash + signature
  │      BLAKE3(data) == hash
  │      Ed25519.verify(signature, hash, signer)
  │
  ├─ 2. Structural validation
  │      Field sizes, non-empty constraints, enum validity
  │      (no state lookups)
  │
  ├─ 3. Signer authorization pre-check
  │      Verify signer is a registered key for data.mid
  │      (skipped for KEY_ADD/KEY_REMOVE/OWNERSHIP_TRANSFER — relay-injected)
  │
  ├─ 4. Network validation
  │      Message network matches the node's configured network
  │
  ├─ 5. Mempool admission
  │      Deduplication (by message hash)
  │      Capacity check (default: 100,000)
  │      Timestamp window (10 min past, 30 sec future)
  │
  ├─ [Mempool] ──── Consensus proposes block ────
  │
  ├─ 6. Block execution
  │      Serial account pre-pass (KEY_ADD, KEY_REMOVE, OWNERSHIP_TRANSFER,
  │      ACCOUNT_DATA, VERIFICATION_ADD/REMOVE, PROJECT_CREATE, PROJECT_REMOVE, FORK)
  │      Parallel project execution (grouped by project_id)
  │      Full state validation (authorization, CAS checks, etc.)
  │
  └─ 7. Finalization
         State diffs applied to base store
         Block built and stored
         Messages broadcast to subscribers
```

### Rejection Points

Messages can be rejected at any stage. Each stage returns a specific error:

| Stage | Example Errors |
| --- | --- |
| 1. Verification | Hash mismatch, invalid signature |
| 2. Structural | Missing body, invalid field length |
| 3. Pre-check | Signer not registered for MID |
| 4. Network | Wrong network (e.g., testnet message to devnet node) |
| 5. Mempool | Duplicate message, mempool full, timestamp out of window |
| 6. Execution | Unauthorized, CAS mismatch, project not found |

Stages 1-5 happen synchronously on the submit RPC. Stage 6 happens asynchronously during block execution — if a message fails execution, it's silently dropped (the block proceeds without it).

### Subscriber Notifications

Messages are broadcast to `SubscribeMessages` subscribers **only after consensus finalization** (stage 7), not on submit. This ensures subscribers see the canonical committed order and never see messages that fail execution.

### Timestamp Validation

The mempool enforces a timestamp window:

* **Maximum age:** 10 minutes in the past (configurable via `max_timestamp_age_secs`)
* **Maximum drift:** 30 seconds in the future (configurable via `max_timestamp_drift_secs`)

This prevents replay of old messages and rejects messages with clock skew beyond the tolerance window.

## Brand

### Logo

The Makechain wordmark uses Inter SemiBold at tight letter-spacing, with the five brand shapes arranged below.

#### Dark background
#### Light background
#### Clear space Maintain at least 1x the height of the shapes row as clear space around the logo.
### Brand Shapes

The five primary brand shapes appear in the logo and serve as visual anchors throughout documentation.
Square #00EEBE
Circle #7A3BF7
Triangle #FA7CFA
Star #FAD030
Heart #FE0302
### Usage in Headings

Shapes are placed inline to the left of section headings using `` in MDX:

```mdx
## Section Title
```

Cycle through the five brand shapes per page. Assign shapes consistently within a page but vary across pages.

### Principles
Monochrome base
Black and white only. No grays in primary surfaces. The absence of color makes the shapes hit harder.
Vibrant accents
Color only comes from the shapes. Every accent is saturated and distinct — no pastels, no gradients.
Geometric precision
Clean edges, integer coordinates, no anti-aliasing artifacts. Shapes are math, not illustration.
Tight spacing
Dense information, minimal whitespace. Every pixel earns its place. Content over chrome.
## Colors

### Theme

The base theme is pure monochrome. Background and text use black/white with graduated neutral layers for depth.

#### Backgrounds
Background #000000
Background Dark #0a0a0a
Background 2 #111111
Background 3 #191919
Background 4 #1e1e1e
Background 5 #252525
#### Text
Text #ffffff
Text 2 #cccccc
Text 3 #999999
Text 4 #666666
#### Borders
Border #252525
Border 2 #404040
***

### Accent Palette

All color in the system comes from the shape accents. No color is used for text, backgrounds, or UI chrome — only for these geometric marks.

#### Brand (primary 5)
#00EEBE
#7A3BF7
#FA7CFA
#FAD030
#FE0302
#### Extended
#FF6B35
#FF3366
#EC4899
#F59E0B
#84CC16
#22C55E
#14B8A6
#06B6D4
#0096FF
#3B82F6
#6366F1
#8B5CF6
#A855F7
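All of these accents are meant to sit on the pure-black background, and their contrast can be sanity-checked with the standard WCAG 2.x relative-luminance and contrast-ratio formulas. A minimal sketch (the function names are ours, not part of any library):

```rust
// WCAG 2.x contrast check against #000000. sRGB channels are linearized,
// combined into relative luminance, then compared with black (luminance 0).

fn srgb_channel_to_linear(c: u8) -> f64 {
    let c = c as f64 / 255.0;
    if c <= 0.04045 { c / 12.92 } else { ((c + 0.055) / 1.055).powf(2.4) }
}

fn relative_luminance(r: u8, g: u8, b: u8) -> f64 {
    0.2126 * srgb_channel_to_linear(r)
        + 0.7152 * srgb_channel_to_linear(g)
        + 0.0722 * srgb_channel_to_linear(b)
}

/// Contrast ratio of a color against #000000 (relative luminance 0.0).
fn contrast_vs_black(r: u8, g: u8, b: u8) -> f64 {
    (relative_luminance(r, g, b) + 0.05) / 0.05
}

fn main() {
    // White on black is the maximum possible ratio, 21:1.
    assert!((contrast_vs_black(0xff, 0xff, 0xff) - 21.0).abs() < 1e-6);
    // Brand green #00EEBE against black:
    println!("#00EEBE vs #000000 = {:.1}:1", contrast_vs_black(0x00, 0xEE, 0xBE));
}
```

The same check works for any swatch in the palette; WCAG AA requires 4.5:1 for normal text and 3:1 for large text.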
***

### Contrast

All accent colors are tested against the `#000000` background.

| Color | Hex | Ratio | WCAG AA |
| --- | --- | --- | --- |
| Green | `#00EEBE` | 12.8:1 | Pass |
| Purple | `#7A3BF7` | 4.0:1 | Pass (large) |
| Pink | `#FA7CFA` | 8.0:1 | Pass |
| Yellow | `#FAD030` | 11.4:1 | Pass |
| Red | `#FE0302` | 4.6:1 | Pass (large) |
| Orange | `#FF6B35` | 6.5:1 | Pass |
| Blue | `#3B82F6` | 5.3:1 | Pass |
| Cyan | `#06B6D4` | 8.1:1 | Pass |
| Emerald | `#22C55E` | 8.3:1 | Pass |

***

### Light Mode

The system inverts cleanly. All theme tokens have light-mode counterparts:

| Token | Dark | Light |
| --- | --- | --- |
| Background | `#000000` | `#ffffff` |
| Background 2 | `#111111` | `#f5f5f5` |
| Background 3 | `#191919` | `#eeeeee` |
| Text | `#ffffff` | `#000000` |
| Text 2 | `#cccccc` | `#333333` |
| Text 3 | `#999999` | `#666666` |
| Border | `#252525` | `#e0e0e0` |
| Border 2 | `#404040` | `#cccccc` |

Accent colors are identical in both modes — they're vivid enough to work on black or white.

## Components

Patterns for composing content elements across the docs.

### Section Headings

Every H2 gets a shape prefix. The shape is an inline `` at 14px, vertically centered.
Getting Started
Key Features
Architecture
Configuration
Community
***

### Feature Cards

Grid of cards with shape accent, title, and description. Used for overviews and principle lists.
Fast Finality
Sub-second block finality via Simplex BFT. No waiting for confirmations.
Cryptographic Auth
Every message is self-authenticating with Ed25519 signatures and BLAKE3 hashes.
Parallel Execution
Projects execute in parallel within each block via rayon thread pool.
***

### Stat Blocks

Horizontal row of key metrics. Shape serves as a bullet marker.
\~200ms
Block time
\~300ms
Finality
10k+
Messages/sec
32 bytes
Content-addressed IDs
***

### Status Row

Inline shapes as status indicators.
Consensus — operational
gRPC API — operational
DA Layer — syncing
Sharding — planned
***

### Code Blocks

Fenced code blocks use the `#111111` background with monospace font.

```rust
// Content-addressed project ID
let project_id = blake3::hash(&message_bytes);
```

```bash
cargo run --bin node -- --port 50051 --p2p-port 50052
```

```
Global State Root
├── Project A Root (BLAKE3 of sorted key-value diffs)
├── Project B Root
└── ...
```

***

### Tables

Standard markdown tables for structured data. Borders and backgrounds come from theme tokens.

| Message Type | Phase | Scope |
| --- | --- | --- |
| `PROJECT_CREATE` | Serial | SIGNING |
| `KEY_ADD` | Serial | OWNER |
| `COMMIT_BUNDLE` | Parallel | AGENT |
| `REF_UPDATE` | Parallel | AGENT |
| `COLLABORATOR_ADD` | Parallel | SIGNING |

***

### Lists with Shapes

Use shapes as custom bullet markers for feature lists.
Permissionless — anyone can create projects and push code without gatekeepers
Content-addressed — project IDs are BLAKE3 hashes of creation messages
CRDT semantics — deterministic conflict resolution with LWW, remove-wins, and CAS
Merkle-authenticated — every state entry is provable via per-project roots
***

### Callout Boxes

Bordered containers for important information, keyed by shape.
Note
The consensus layer stores only message metadata (\~100-500 bytes). File content lives in a separate DA layer.
Important
REF\_UPDATE uses compare-and-swap. If the ref has moved since your read, the update is rejected.
Tip
Use cargo test test\_name to run a single test by name for fast iteration.
***

### Architecture Diagrams

ASCII diagrams in fenced code blocks, referenced by surrounding shapes.

```
┌─────────────────────────┐
│         Clients         │  grpc-web / gRPC
│   (Browser, CLI, SDK)   │
└───────────┬─────────────┘
            │
┌───────────▼─────────────┐
│     Validator Node      │
│  ┌────────┐ ┌─────────┐ │
│  │  gRPC  │→│ Mempool │ │
│  └────────┘ └────┬────┘ │
│             ┌────▼───┐  │
│             │Simplex │  │
│             │  BFT   │  │
│             └────┬───┘  │
│             ┌────▼───┐  │
│             │ State  │  │
│             └────────┘  │
└─────────────────────────┘
```

***

### Shape Pairing Guide

When writing docs pages, assign shapes to H2s consistently within a page. The recommended cycle:

| Position | Shape | Color | Typical meaning |
| --- | --- | --- | --- |
| 1st H2 | square | `#00EEBE` | Primary / main concept |
| 2nd H2 | circle | `#7A3BF7` | Secondary / supporting |
| 3rd H2 | triangle | `#FA7CFA` | Technical detail |
| 4th H2 | star | `#FAD030` | Configuration / options |
| 5th H2 | heart | `#FE0302` | Community / coda |

For pages with more than 5 sections, pull from the extended shape set: diamond, hexagon, bolt, shield, sparkle, leaf, flame, etc.

## Shapes

55 vector shapes for use as visual anchors throughout the docs. Use inline in MDX headings:

```mdx
## Section Title
```

***

### Geometric
square
rounded-square
circle
oval
triangle
caret
diamond
pentagon
hexagon
heptagon
octagon
capsule
semicircle
parallelogram
trapezoid
### Symbols
star
sparkle
starburst
heart
cross
x-mark
asterisk
bolt
shield
flag
target
eye
infinity
hourglass
ribbon
### Nature
sun
moon
crescent
leaf
flower
droplet
flame
wave
### 3D
cube
pyramid
### Outlines & Rings
ring
donut
spiral
arc
### Directional
arrow-right
arrow-up
chevron-right
chevron-down
### Decorative
grid
dots
dash
slash
zigzag
stripe
bracket
### Colors

| Swatch | Hex | Used by |
| --- | --- | --- |
|  | `#00EEBE` | square, stripe |
|  | `#7A3BF7` | circle |
|  | `#FA7CFA` | triangle, flower |
|  | `#FAD030` | star, sparkle, bolt, flame, sun |
|  | `#FE0302` | heart, x-mark, target, flag |
|  | `#FF6B35` | diamond, semicircle, starburst, flame, pyramid |
|  | `#0096FF` | pentagon, parallelogram, rounded-square, wave |
|  | `#06B6D4` | hexagon, eye, slash |
|  | `#14B8A6` | octagon, cube, bracket |
|  | `#3B82F6` | cross, droplet, grid |
|  | `#84CC16` | arrow-right, heptagon, dash |
|  | `#F59E0B` | arrow-up, crescent, hourglass |
|  | `#EC4899` | trapezoid, spiral, ribbon |
|  | `#8B5CF6` | chevron-down, moon |
|  | `#FF3366` | ring, asterisk, zigzag, caret |
|  | `#6366F1` | donut, shield |
|  | `#A855F7` | oval, infinity, arc |
|  | `#22C55E` | capsule, leaf |
|  | `#EC4899` | chevron-right, flower |

## Typography

### Typeface

The system uses the default Vocs font stack — system sans-serif for body text and monospace for code.
BODY
Inter, -apple-system, BlinkMacSystemFont, "Segoe UI", Helvetica, Arial, sans-serif
CODE
ui-monospace, SFMono-Regular, "SF Mono", Menlo, Consolas, monospace
***

### Scale
H1
Page Title
H2
Section Heading
H3
Subsection Heading
BODY
Every operation is a cryptographically signed, self-authenticating message — verifiable without external lookups. Messages are ordered by Simplex BFT consensus with sub-second finality.
SMALL / CAPTION
Supplementary text for labels, metadata, and annotations.
CODE
shard\_index = project\_id\[0..4] as u32 % num\_shards
***

### Hierarchy Rules

1. **One H1 per page** — the page title. No shape prefix.
2. **H2 with shape** — major sections. Every H2 gets a shape to its left at `width="14"`.
3. **H3 plain** — subsections within an H2. No shape, no decoration.
4. **Body at 0.85 opacity** — slightly softened white for comfortable reading on black.
5. **Code in monospace** — inline `code` and fenced blocks use the monospace stack on `#111111` background.

***

### Inline Code

Use backtick-wrapped `inline code` for:

* Field names: `project_id`, `old_hash`, `da_reference`
* Message types: `COMMIT_BUNDLE`, `REF_UPDATE`
* Hex values: `0x01`, `0x0A`
* CLI commands: `cargo test`, `bun run build`

***

### Tables

Tables use the default Vocs styling — borders from the theme, alternating row contrast via background layers.

| Weight | Usage | Example |
| --- | --- | --- |
| 700 | H1 page title | `font-weight: bold` |
| 600 | H2, H3 headings | `font-weight: semibold` |
| 400 | Body text | `font-weight: normal` |
| 400 | Code blocks | Monospace at normal weight |

## Writing Guide

The Makechain documentation is the canonical reference for the protocol, its APIs, and tooling. This guide provides editorial standards for writing clear, consistent, and accurate documentation.

This page covers:

* [Writing general documentation](#general-documentation)
* [Writing protocol documentation](#protocol-documentation)
* [Writing API documentation](#api-documentation)

***

### General Documentation

#### Voice and tone

Write in a technical, direct voice. Assume the reader is a developer who understands cryptography, distributed systems, and version control. Do not over-explain fundamentals — link to external references when background is needed.

**Be precise, not verbose.** Every sentence should convey information. Cut filler words, hedging phrases ("it should be noted that"), and unnecessary qualifiers.
* Correct: "Messages are ordered by Simplex BFT consensus with sub-second finality."
* Incorrect: "It's worth noting that messages are typically ordered by what we call Simplex BFT consensus, which generally provides sub-second finality."

#### Second person

Write in the second person. Use "you" when addressing the reader directly.

* Correct: "You submit messages via the gRPC `SubmitMessage` endpoint."
* Incorrect: "We submit messages via the gRPC `SubmitMessage` endpoint."

Reserve "we" for statements where the Makechain team is the explicit subject: "We plan to add P-256/WebAuthn as a secondary signature scheme."

#### Present tense

Use present tense to describe how the system works. Use future tense only for features that do not exist yet.

* Correct: "The execution engine processes messages in two phases."
* Incorrect: "The execution engine will process messages in two phases."

#### Active voice

Use active voice. Passive voice obscures the subject and adds unnecessary words.

* Correct: "The leader proposes blocks by draining the mempool."
* Incorrect: "Blocks are proposed by the leader by draining the mempool."

#### Short sentences

One idea per sentence. If a sentence has more than one comma, split it. Follow a long sentence with a short one.

* Correct: "Each project group operates on its own copy-on-write overlay store. This ensures isolation between projects."
* Incorrect: "Each project group operates on its own copy-on-write overlay store that can read the base state plus any account changes from Phase 1 via the snapshot store, which ensures isolation between projects."

#### Gender-neutral language

Use "they" as a singular pronoun. Address groups as "developers," "users," or "validators."

#### No emojis

Do not use emojis in documentation. Color and visual interest come from the [shape system](/design/shapes), not emoji.
***

### Spelling and Terminology

#### Makechain-specific terms

Use these terms consistently:

| Term | Usage | Not |
| --- | --- | --- |
| Make ID | The account identifier. Abbreviate as `mid` in code contexts. | MakeID, make-id |
| message | Lowercase when referring to the concept. | Message (unless starting a sentence) |
| message type | Refer to specific types in `SCREAMING_SNAKE_CASE` with backticks: `PROJECT_CREATE` | ProjectCreate, project\_create |
| project ID | Lowercase "ID." Always note it is content-addressed (BLAKE3 hash of the creation message). | Project Id, projectId |
| ref | A branch or tag pointer. Plural: "refs." | reference, branch (unless clarifying) |
| scope | Key permission level. Three scopes: OWNER, SIGNING, AGENT. Show in ALL CAPS without backticks when used as a label. | scope level, permission |
| DA layer | Data availability layer. Spell out on first use per page, then abbreviate. | data layer, blob store |
| state root | The BLAKE3 merkle root of all state. No hyphen. | stateroot, state-root |
| mempool | One word, lowercase. | mem-pool, memory pool |
| consensus | Lowercase unless starting a sentence. Refer to the specific algorithm as "Simplex BFT." | Consensus |

#### External product casing

Match the canonical casing of external tools and protocols:

* Ed25519 (not ed25519 or ED25519)
* BLAKE3 (not blake3 or Blake3)
* gRPC (not GRPC or Grpc)
* grpc-web (lowercase with hyphen)
* protobuf (lowercase)
* Rust (capitalized)
* rayon (lowercase — it's a crate name)
* Cloudflare (capitalized)
* Ethereum (capitalized), but `ETH_ADDRESS` in code
* Solana (capitalized), but `SOL_ADDRESS` in code

#### Abbreviations

Spell out abbreviations on first use per page, followed by the abbreviation in parentheses:

* "data availability (DA) layer"
* "Byzantine Fault Tolerant (BFT) consensus"
* "compare-and-swap (CAS)"

These abbreviations are acceptable without expansion: HTTP, gRPC, URL, API, CLI, SDK, CI/CD, hex.

Do not use Latin abbreviations. Write "for example" instead of "e.g." and "that is" instead of "i.e."

#### Numbers and units

* Byte counts are explicit: "32 bytes," "64 bytes"
* Hash sizes: "BLAKE3 (32 bytes)" on first mention per page
* Time: use "ms" for milliseconds, "s" for seconds — "\~200ms block time"
* Throughput: "10,000+ messages per second"
* Storage: use "GB" for gigabytes, "KB" for kilobytes
* Hex values: lowercase, no `0x` prefix unless referencing a state key prefix — "prefix `0x01`"

***

### Formatting

#### Headings

One H1 per page — the page title. No shape prefix on H1.

All H2 headings get a shape prefix using an inline ``:

```mdx
## Section Title
```

H3 headings are plain text — no shape, no decoration. Do not skip heading levels (H2 → H4).

Use sentence case for all headings:

* Correct: `## State root computation`
* Incorrect: `## State Root Computation`

Exception: capitalize product names in headings — "Creating your first Makechain project," "Configuring Simplex BFT."

#### Shape assignment

Cycle through the five brand shapes for H2s within a page:

1. square (`#00EEBE`) — primary concept
2. circle (`#7A3BF7`) — secondary / supporting
3. triangle (`#FA7CFA`) — technical detail
4.
star (`#FAD030`) — configuration / options
5. heart (`#FE0302`) — supplementary / coda

For pages with more than 5 sections, pull from the [extended shape set](/design/shapes): diamond, hexagon, bolt, shield, sparkle, leaf, flame.

#### Inline code

Use backticks for:

* Message types: `PROJECT_CREATE`, `COMMIT_BUNDLE`
* Field names: `project_id`, `old_hash`, `da_reference`
* Hex prefixes: `0x01`, `0x0A`
* CLI commands: `cargo test`, `bun run build`
* RPC methods: `SubmitMessage`, `GetProject`
* Rust types and crate names: `MemoryStore`, `commonware-consensus`

Do not use backticks for:

* Product names: Makechain, Simplex BFT, Commonware
* Scope labels: OWNER, SIGNING, AGENT (use ALL CAPS plain text)
* File names and directories — use **bold** instead: **app.json**, **src/state/**

#### File and directory names

Use **bold** for file names, directory names, and file extensions in prose:

* Correct: "Your protocol buffer definition is in **proto/makechain.proto**."
* Incorrect: "Your protocol buffer definition is in `proto/makechain.proto`."

#### Code blocks

Always specify the language for fenced code blocks:

````
```rust
let project_id = blake3::hash(&message_bytes);
```
````

Use `bash` for shell commands, `rust` for Rust code, `json` for JSON, and plain triple backticks (no language) for ASCII diagrams and pseudocode.

#### Tables

Use markdown tables for structured reference data. Tables are the primary format for:

* Message type lists with descriptions and scopes
* Configuration parameters with defaults
* State key prefixes and namespaces
* Storage limits
* Error types with triggers

Always include a header row with separator:

```markdown
| Type | Description | Scope |
|------|-------------|-------|
| `PROJECT_CREATE` | Create a new project | SIGNING |
```

#### Lists

Use dashes (`-`) for unordered lists, not asterisks. Start numbered lists at `1`.
Use **bold** for the lead term in definition-style lists:

```markdown
- **Permissionless** — anyone can create projects and push code
- **Content-addressed** — project IDs are BLAKE3 hashes of creation messages
```

Use em dashes (—) to separate the term from its definition, not colons or hyphens.

#### Links

Link descriptive text, not "here" or "this page":

* Correct: "See the [storage limits](/protocol/storage-limits) for per-account capacity."
* Incorrect: "See storage limits [here](/protocol/storage-limits)."

Use relative paths for internal links: `/protocol/overview`, not `https://makechain.pages.dev/protocol/overview`.

#### ASCII diagrams

Use box-drawing characters for architecture diagrams in plain fenced code blocks:

```
┌─────────────┐
│  Component  │
└──────┬──────┘
       │
┌──────▼──────┐
│ Next Layer  │
└─────────────┘
```

Diagrams should be self-contained and readable without surrounding text.

***

### Protocol Documentation

Protocol pages document the specification. They are reference material — precise, complete, and authoritative.

#### Describe behavior, not implementation

Protocol docs describe what the system does, not how the Rust code implements it. Reference implementation details (crate names, function names) belong in code comments and CLAUDE.md, not in user-facing docs.

* Correct: "Account-level messages are applied serially because they modify shared account state."
* Incorrect: "Account-level messages are applied serially using the `apply_account_messages` function in `execution.rs`."

#### Document the envelope

When introducing a message type, always specify:

1. The message type name in `SCREAMING_SNAKE_CASE`
2. The required key scope (OWNER, SIGNING, or AGENT)
3. The conflict key or ordering mechanism (CAS, LWW, append-only)
4.
The semantics category (1P or 2P)

#### Show the state change

For each message type, describe:

* **Preconditions** — what must be true for the message to be accepted
* **Effect** — what state changes when the message is applied
* **Failure modes** — what errors are returned and when

#### Use tables for message type reference

The canonical format for listing message types:

```markdown
| Type | Description | Required Scope |
|------|-------------|---------------|
| `PROJECT_CREATE` | Create a new project with name and visibility | SIGNING |
| `PROJECT_REMOVE` | Remove a project (hides refs, commits, collaborators) | OWNER |
```

#### Conflict resolution rules

Always state the conflict resolution rule explicitly:

* "On a timestamp tie, remove wins."
* "Last-write-wins per conflict key `(project_id, field)`."
* "Compare-and-swap: includes expected current hash. If the ref has moved, the update is rejected."

***

### API Documentation

API pages document the gRPC service. They are functional reference — developers look things up here while coding.

#### RPC method format

Document each RPC with:

1. Method name in backticks: `GetProject`
2. Request fields as a table
3. Response fields as a table
4. A curl/grpcurl example when useful
5. Error conditions

#### Field descriptions

Write useful descriptions. Teach the developer something beyond what the type signature shows:

* Correct: "`project_id` — the BLAKE3 hash of the original `PROJECT_CREATE` message (32 bytes, hex-encoded)"
* Incorrect: "`project_id` — the project ID"

#### Pagination

All list endpoints use cursor-based pagination. Document the pattern once and reference it:

* `cursor` — opaque string from a previous response. Omit for the first page.
* `limit` — maximum items to return. Default 50, maximum 200.
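The default-and-maximum rule for `limit` reduces to a simple clamp on the server side. A minimal sketch (the function name is illustrative, not the node's actual API):

```rust
// Resolve the effective page size from an optional client-supplied limit:
// omitted -> default 50, anything above 200 -> clamped to 200.
const DEFAULT_LIMIT: u32 = 50;
const MAX_LIMIT: u32 = 200;

fn effective_limit(requested: Option<u32>) -> u32 {
    requested.unwrap_or(DEFAULT_LIMIT).min(MAX_LIMIT)
}

fn main() {
    assert_eq!(effective_limit(None), 50);       // omitted: default applies
    assert_eq!(effective_limit(Some(25)), 25);   // within range: honored
    assert_eq!(effective_limit(Some(500)), 200); // clamped to the maximum
    println!("pagination defaults verified");
}
```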
#### Streaming endpoints

For streaming RPCs (`SubscribeMessages`, `SubscribeBlocks`), document:

* The filter parameters
* What triggers a message on the stream
* Whether the stream replays historical data or is live-only

***

### Page Structure

Every documentation page follows this structure:

```
# Page Title                ← H1, no shape

Introductory paragraph.     ← 1-2 sentences establishing context

## First Section            ← H2 with shape

Content...

### Subsection              ← H3, plain

Content...

## Second Section           ← H2 with shape

Content...
```

#### Opening paragraph

Start every page with 1-2 sentences that tell the reader what this page covers and why it matters. No preamble, no "In this section we will discuss..."

* Correct: "Makechain enforces per-account storage limits to prevent unbounded state growth."
* Incorrect: "This page describes the storage limits system. Storage limits are an important part of the protocol."

#### One concept per page

Each page covers one topic. If you find yourself writing "see also" to another section on the same page, consider whether the content should be its own page.

#### End with edges

Close pages with edge cases, error types, or future considerations. The reader who reaches the bottom is looking for details.

***

### Punctuation

#### Oxford commas

Use Oxford commas: "projects, commits, and refs" — not "projects, commits and refs."

#### Em dashes

Use em dashes (—) to set off parenthetical clauses. No spaces around em dashes:

* Correct: "Every operation is a cryptographically signed message — verifiable without external lookups."
* Incorrect: "Every operation is a cryptographically signed message - verifiable without external lookups."

In MDX, write `—` directly (Unicode em dash). The `&mdash;` entity also works.

#### Double quotes

Use double quotes in prose. Reserve single quotes for nested quotation or code contexts:

* Correct: Set the field named "id" to your project's ID.
* Incorrect: Set the field named 'id' to your project's ID.
#### Possessives

Singular possessive: add **'s** regardless of final consonant — "BLAKE3's digest," "the process's state."

Plural possessive ending in **s**: add just the apostrophe — "the validators' signatures."

#### Slashes

No spaces around slashes: "client/server," "Android/iOS."

***

### Glossary

Core terms used throughout Makechain documentation.

#### Protocol

| Term | Definition |
| --- | --- |
| Message | A signed, self-authenticating operation envelope containing a BLAKE3 hash, Ed25519 signature, signer public key, and operation payload |
| Message type | The specific operation: `PROJECT_CREATE`, `COMMIT_BUNDLE`, `REF_UPDATE`, etc. |
| 1P (one-phase) | Unilateral state change with no paired undo message. Categories: Singleton, LWW Register, Append-only, State transition |
| 2P (two-phase) | Add/Remove pairs operating on a set. Remove wins on timestamp tie |
| CAS | Compare-and-swap — optimistic locking where an update includes the expected current value |
| LWW | Last-write-wins — the most recent message by consensus order overwrites prior state |
| Remove-wins | On a timestamp tie between add and remove, the remove takes precedence |
| Conflict key | The tuple that identifies which state slot a message targets, for example `(project_id, field)` |

#### Identity

| Term | Definition |
| --- | --- |
| Make ID (MID) | Unique account identifier (uint64) assigned by the onchain registry |
| Scope | Permission level for a registered key: OWNER (full control), SIGNING (push, manage), AGENT (automated actions) |
| Claim signature | Cryptographic proof linking an external address to a Make ID. Message format: `makechain:verify:` |

#### Consensus

| Term | Definition |
| --- | --- |
| Simplex BFT | Single-chain Byzantine Fault Tolerant consensus protocol from the Commonware library |
| Block | A batch of messages ordered by consensus. \~200ms block time |
| Finality | A block is final after two consecutive blocks are notarized (2-chain rule). \~300ms |
| Notarization | A 2/3+ validator vote to accept a proposed block |
| Mempool | Queue of validated messages waiting to be included in a block |

#### Execution

| Term | Definition |
| --- | --- |
| Account pre-pass | Phase 1: serial execution of account-level messages that modify shared state |
| Project execution | Phase 2: parallel execution of project-scoped messages grouped by `project_id` |
| Overlay store | Copy-on-write state store providing isolation between parallel project groups |
| Snapshot store | Read-only view of base state plus account pre-pass diffs, used as the base for overlay stores |
| State root | BLAKE3 merkle root combining all per-project roots in sorted order |

#### Storage

| Term | Definition |
| --- | --- |
| Storage unit | Yearly capacity allocation for an account. Default: 1 (free tier) |
| DA layer | Data availability layer — separate storage for file content (blobs, trees), referenced by `da_reference` in commit bundles |
| Pruning | Automatic removal of oldest unprotected commit metadata when a project exceeds its limit. Commits referenced by active refs are never pruned |
| Ref | A named pointer (branch or tag) to a commit hash |
| Fast-forward | A ref update where the new commit is a descendant of the current ref target |

#### Infrastructure

| Term | Definition |
| --- | --- |
| Commonware | The library of distributed systems primitives that Makechain builds on |
| tonic | Rust gRPC framework used for the API layer |
| rayon | Rust data-parallelism library used for parallel project execution |
| QMDB | Queryable Merkle Database — planned persistent state backend |

import * as Demo from '../../components/Demo.tsx'

## Create a Project

Generate an Ed25519 keypair, register it with your Make ID, and create a new project. The project ID is the BLAKE3 hash of the `PROJECT_CREATE` message itself — content-addressed from the moment of creation.

### Demo
Network: devnet Finality: ~300ms
### CLI equivalent

```bash
# Generate a keypair
makechain keygen

# Register the key (relayed from onchain registry)
makechain register-key --scope signing

# Create a project
makechain create-project --name my-first-repo --visibility public
```

### What happened

1. **Key generation** — An Ed25519 keypair is generated locally. The private key never leaves your machine. The public key is 32 bytes.
2. **Key registration** — A `KEY_ADD` message is submitted with your public key and the SIGNING scope. This message is relayed from the onchain registry into the consensus layer so validators can verify your future signatures without querying the chain.
3. **Project creation** — A `PROJECT_CREATE` message is constructed with your chosen name and visibility. The message is BLAKE3-hashed, Ed25519-signed with your private key, and submitted via gRPC.
4. **Consensus** — The message enters the mempool, passes structural validation, and is included in the next block by the current leader. After 2/3+ validators notarize the block and one more block is built on top (2-chain rule), your project is final.

The project ID is the BLAKE3 hash of the `PROJECT_CREATE` message envelope — it is deterministic, content-addressed, and globally unique.

import * as Demo from '../../components/Demo.tsx'

## Fork a Project

Fork an existing project at a specific commit. The forked project gets a new content-addressed ID (the BLAKE3 hash of the `FORK` message) and inherits the source project's refs and commit history at that point.

### Demo
Source: alice/web-framework. Semantics: 1P Singleton.
FORK is a 1P Singleton — once created, it cannot be undone. The new project ID is the BLAKE3 hash of this message.
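Before a fork (or any project creation) is accepted, the serial account pre-pass enforces the storage capacity rule `project_count < storage_units * 10`. A minimal sketch of that check, with invented field names:

```typescript
// Illustrative account state — field names mirror the docs, not the node's structs.
interface AccountState {
  projectCount: number;
  storageUnits: number;
}

const PROJECTS_PER_STORAGE_UNIT = 10;

// The account pre-pass rule: a fork (like a create) is allowed only while
// project_count < storage_units * 10; otherwise it fails with StorageLimitExceeded.
function checkForkCapacity(account: AccountState): { ok: boolean; error?: string } {
  if (account.projectCount >= account.storageUnits * PROJECTS_PER_STORAGE_UNIT) {
    return { ok: false, error: "StorageLimitExceeded" };
  }
  return { ok: true };
}

console.log(checkForkCapacity({ projectCount: 9, storageUnits: 1 }));  // last free slot
console.log(checkForkCapacity({ projectCount: 10, storageUnits: 1 })); // at capacity
```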
### CLI equivalent ```bash # Fork a project at a specific commit makechain fork \ --source e7f8a9b0c1d2... \ --commit deadbeef0123... \ --name my-web-framework \ --visibility public ``` ### What happened 1. **Read source** — You query the source project to find the commit you want to fork at. The source project must be accessible to you (public, or you are a collaborator). 2. **Fork point** — The `source_commit_hash` anchors the fork to a precise point in the source project's history. This is recorded permanently in the fork's metadata. 3. **FORK message** — A `FORK` message is submitted. This is a 1P Singleton — it creates a new resource irreversibly. The new project ID is the BLAKE3 hash of the `FORK` message itself (not the source project). This guarantees a globally unique, content-addressed ID. 4. **Account pre-pass** — `FORK` is processed in the serial account pre-pass (Phase 1) because it modifies shared account state (`project_count`). The protocol checks `project_count < storage_units * 10` before allowing the fork. If you are at capacity, the fork is rejected with `StorageLimitExceeded`. #### Cross-shard note In a future sharded architecture, `FORK` is the one operation that requires cross-shard coordination. The `FORK` message includes a state proof from the source project to verify the `source_commit_hash` exists without querying the source shard. ## Demos Interactive walkthroughs of core Makechain operations. Each demo shows the full message lifecycle — from construction to consensus finality. ## Manage Access Add collaborators to a project, assign permission levels, and manage the access control list. Collaborators are a 2P set — add and remove pairs with remove-wins semantics. ### Demo
Project: my-first-repo. Semantics: 2P Set (remove-wins).
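The 2P-set semantics can be sketched as a fold over add/remove operations. Field names and the helper are illustrative; real execution applies messages in consensus order rather than folding a batch:

```typescript
type Permission = "read" | "write" | "admin" | "owner";

interface CollabOp {
  kind: "add" | "remove";
  mid: number;
  permission?: Permission; // present on adds only
  timestamp: number;
}

// Fold add/remove operations into the current collaborator set.
function foldCollaborators(ops: CollabOp[]): Map<number, Permission> {
  const lastAdd = new Map<number, CollabOp>();
  const lastRemove = new Map<number, CollabOp>();
  for (const op of ops) {
    const slot = op.kind === "add" ? lastAdd : lastRemove;
    const prev = slot.get(op.mid);
    if (!prev || op.timestamp >= prev.timestamp) slot.set(op.mid, op);
  }
  const current = new Map<number, Permission>();
  for (const [mid, add] of lastAdd) {
    const rem = lastRemove.get(mid);
    // Remove-wins: the add survives only if strictly newer than the remove.
    if (!rem || add.timestamp > rem.timestamp) current.set(mid, add.permission!);
  }
  return current;
}

const state = foldCollaborators([
  { kind: "add", mid: 7, permission: "write", timestamp: 5 },
  { kind: "remove", mid: 7, timestamp: 5 }, // timestamp tie — remove wins
  { kind: "add", mid: 8, permission: "write", timestamp: 3 },
  { kind: "add", mid: 8, permission: "admin", timestamp: 6 }, // permission update via re-add
]);
console.log(Array.from(state.entries())); // mid 7 removed, mid 8 at admin
```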
### Permission levels

| Level | Capabilities |
| ----- | ------------ |
| OWNER | Full control — transfer, archive, delete, manage all collaborators |
| ADMIN | Manage collaborators, update project metadata, all write operations |
| WRITE | Push commits, update refs, create branches and tags |
| READ | View project state, refs, and commits (relevant for private projects) |
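The levels form a strict hierarchy, so a permission check reduces to an ordering comparison. A sketch, assuming the ordering implied above (the protocol's internal encoding may differ):

```typescript
// Assumed total order, least to most privileged.
const LEVELS = ["read", "write", "admin", "owner"] as const;
type Level = (typeof LEVELS)[number];

// True when `actual` grants at least the privileges of `required`.
function hasAtLeast(actual: Level, required: Level): boolean {
  return LEVELS.indexOf(actual) >= LEVELS.indexOf(required);
}

// e.g. adding a collaborator requires at least ADMIN:
console.log(hasAtLeast("owner", "admin")); // true
console.log(hasAtLeast("write", "admin")); // false
```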
### Update permissions
Updating an existing collaborator's permission reuses COLLABORATOR\_ADD. The count does not change — only new collaborators increment the count.
### Remove a collaborator
Submitting `COLLABORATOR_REMOVE` revokes access. On a timestamp tie with a concurrent add, remove wins.
### How access control works * **COLLABORATOR\_ADD** requires the signer to have SIGNING scope and at least ADMIN permission on the project * **COLLABORATOR\_REMOVE** has the same requirements * The collaborator set is a **2P set** — on a timestamp tie between add and remove for the same collaborator, remove wins * Permission updates (re-adding with a different level) do not change the collaborator count * Each project supports up to **50 collaborators** per storage unit ## Push Commits Bundle commit metadata, upload content to the DA layer, and update refs — all in a single atomic flow. Consensus orders the operations and the ref update uses compare-and-swap to prevent conflicts. ### Demo
Project: my-first-repo. Ref: refs/heads/main.
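The compare-and-swap ref update at the heart of this flow can be sketched as follows. The record shape and return type are assumptions; fast-forward and `force` handling are elided because they need the commit graph:

```typescript
interface Ref {
  hash: string;  // current commit hash
  nonce: number; // bumps on every successful update
}

// Compare-and-swap: the message carries the hash the client last read (old_hash);
// if the ref moved in the meantime, the update is rejected instead of overwriting.
function applyRefUpdate(
  current: Ref | undefined,
  oldHash: string,
  newHash: string,
): { ok: boolean; ref?: Ref; error?: string } {
  if ((current?.hash ?? "") !== oldHash) {
    return { ok: false, error: "CAS failed: ref moved since read" };
  }
  return { ok: true, ref: { hash: newHash, nonce: (current?.nonce ?? 0) + 1 } };
}

console.log(applyRefUpdate({ hash: "aabb", nonce: 4 }, "aabb", "ccdd")); // accepted
console.log(applyRefUpdate({ hash: "eeff", nonce: 4 }, "aabb", "ccdd")); // rejected
```

On a CAS failure the client re-reads the ref, rebases or merges, and retries with the fresh `old_hash`.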
### CLI equivalent ```bash # Push content (bundles commits + updates ref in one operation) makechain push --project a1b2c3d4... --ref refs/heads/main ``` ### What happened 1. **DA upload** — File content (blobs and tree structures) is uploaded to the data availability layer. The consensus layer never sees the raw content — only a `da_reference` pointing to it. 2. **Commit bundle** — A `COMMIT_BUNDLE` message declares the new commit metadata: hash, parent hashes, tree root, author, and title. Commits are ordered parent-first within the bundle. The minimum required scope is AGENT, allowing CI/CD systems and automated tooling to push on behalf of users. 3. **Ref update** — A `REF_UPDATE` message moves `refs/heads/main` to the new commit. It includes the expected current hash (`old_hash`) for compare-and-swap. If another push landed between your read and write, the CAS check fails and the update is rejected — no silent overwrites. The update must be fast-forward (new commit descends from old) unless `force: true`. 4. **Parallel execution** — Both messages are grouped by `project_id` and executed together in the project's parallel execution group. The overlay store provides copy-on-write isolation from other projects in the same block. ## Register a Make ID Every identity on Makechain is anchored to a wallet on Tempo — an EOA, smart wallet, or WebAuthn passkey. You connect your wallet, generate an Ed25519 keypair, and register on the onchain registry. The registry assigns a Make ID, binds it to your wallet address (`owner_address`), and relays a `KEY_ADD` message into the consensus layer. Registration costs gas, providing natural spam resistance. MID ownership is transferable onchain for social recovery and account migration. ### Interactive Demo Registry: onchain contract. Relay: KEY_ADD into consensus.
The private key is generated locally and never leaves your machine. Deterministic signing — no nonce reuse risk.
The registry event is relayed into the Makechain consensus layer as a KEY\_ADD message. This is the bridge between the onchain registry and the protocol — validators learn about your key without querying the chain directly.
### Add more keys Once your account is active, you can register additional keys with different scopes.
AGENT keys can push commits and update refs but cannot manage collaborators or account settings. Ideal for CI/CD pipelines and AI agents.
### Set your profile
ACCOUNT\_DATA uses LWW Register semantics — the most recent message by consensus order wins per (mid, field) conflict key.
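The LWW Register semantics can be sketched as a fold keyed by `(mid, field)`. The `order` field stands in for consensus order, and the message shape is illustrative:

```typescript
interface AccountDataMsg {
  mid: number;
  field: string;  // e.g. "username", "bio"
  value: string;
  order: number;  // stand-in for consensus order (block number + index)
}

// LWW Register fold keyed by (mid, field): the latest message per key wins.
// Consensus order is total, so two messages never share an `order` value.
function foldAccountData(msgs: AccountDataMsg[]): Map<string, string> {
  const latest = new Map<string, AccountDataMsg>();
  for (const msg of msgs) {
    const key = `${msg.mid}:${msg.field}`;
    const prev = latest.get(key);
    if (!prev || msg.order > prev.order) latest.set(key, msg);
  }
  return new Map(Array.from(latest, ([key, msg]) => [key, msg.value]));
}

const profile = foldAccountData([
  { mid: 42, field: "username", value: "alice", order: 1 },
  { mid: 42, field: "bio", value: "hi", order: 2 },
  { mid: 42, field: "username", value: "alice-dev", order: 3 }, // newer write wins
]);
console.log(profile.get("42:username")); // "alice-dev"
```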
### CLI equivalent ```bash # Generate a keypair makechain keygen # Register on the onchain registry (assigns MID) makechain register # Add a SIGNING key (requires OWNER key to sign) makechain register-key --scope signing # Add an AGENT key makechain register-key --scope agent # Set profile metadata makechain set-account --field username --value alice makechain set-account --field bio --value "Building the future..." ``` ### What happened 1. **Key generation** — An Ed25519 keypair is generated locally. The public key is 32 bytes, the private key never leaves your machine. Ed25519 uses deterministic signing, so there is no nonce reuse risk. 2. **Onchain registration** — You submit a transaction to the Makechain registry contract on Tempo with your public key. The registry assigns a unique Make ID (uint64), binds your wallet as the `owner_address`, and emits an event. The gas cost prevents spam — every account has a real economic anchor. 3. **Relay into consensus** — The registry event is picked up and relayed into the Makechain consensus layer as a `KEY_ADD` message with OWNER scope and your wallet's `owner_address`. This is processed in the account pre-pass (Phase 1, serial) because it modifies shared account state. 4. **Account live** — After finalization (\~300ms), your account exists in consensus state. You can now create projects, push commits, add collaborators, and verify external addresses. Your wallet can always add new Ed25519 keys, and MID ownership can be transferred onchain for social recovery. 
#### Key scopes | Scope | What it can do | Typical use | | ------- | ------------------------------------------------------------- | ---------------------------- | | OWNER | Everything — manage keys, transfer projects, delete account | Your primary key | | SIGNING | Push commits, update refs, manage collaborators, set metadata | Day-to-day development | | AGENT | Push commits and update refs only | CI/CD, AI agents, automation | Each account can have up to **50 keys**. Keys are a 2P set — use `KEY_REMOVE` to revoke a compromised key. On a timestamp tie, remove wins. ## Verify Identity Link an external address (Ethereum or Solana) to your Make ID by signing a deterministic challenge message. The claim is verified on-chain and stored in consensus state. ### Interactive Demo Type: ETH_ADDRESS. Scheme: EIP-191 personal_sign.
### Solana variant
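A sketch of the Solana-style check using Node's built-in Ed25519 support. The challenge body after the `makechain:verify:` prefix is a placeholder (the protocol defines the exact challenge), and the keypair is generated locally for illustration:

```typescript
import { generateKeyPairSync, sign, verify } from "node:crypto";

// The docs specify the `makechain:verify:` prefix; the rest of the challenge is
// elided in the source, so a placeholder body stands in for it here.
const challenge = Buffer.from("makechain:verify:" + "<challenge-body>");

// The claimant signs the challenge with their Solana keypair (Ed25519).
const { publicKey, privateKey } = generateKeyPairSync("ed25519");
const signature = sign(null, challenge, privateKey);

// The validator verifies the Ed25519 signature directly — for Solana the address
// *is* the public key, so a valid signature proves control of the address.
const accepted = verify(null, challenge, publicKey, signature);
console.log("verification accepted:", accepted);
```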
### How verification works #### Ethereum (ETH\_ADDRESS) 1. You sign the challenge `makechain:verify:` using [EIP-191](https://eips.ethereum.org/EIPS/eip-191) `personal_sign` 2. The validator recovers the signer address from the signature using secp256k1 + keccak256 3. If the recovered address matches the `address` field, the verification is accepted #### Solana (SOL\_ADDRESS) 1. You sign the challenge `makechain:verify:` with your Solana keypair 2. The validator verifies the Ed25519 signature directly — the Solana address is the public key 3. If the signature is valid, the verification is accepted #### Removal Verifications are a 2P set. Submit `VERIFICATION_REMOVE` to unlink an address. On a timestamp tie between add and remove, remove wins. ## Examples Working examples using the CLI, grpcurl, JavaScript, and monitoring endpoints. ### CLI workflows #### Register, create, query Start a local node: ```bash cargo run --bin node -- --seed 1 --network devnet ``` Generate a keypair: ```bash cargo run --bin cli -- keygen # Secret: a1b2c3... (64 hex chars) # Public: d4e5f6... (64 hex chars) ``` Register the key and create a project: ```bash cargo run --bin cli -- register-key --secret a1b2c3... --mid 1 cargo run --bin cli -- create-project \ --secret a1b2c3... 
\ --mid 1 \ --name "my-project" \ --visibility public \ --description "My first Makechain project" ``` Query: ```bash cargo run --bin cli -- get-project-by-name --mid 1 --name "my-project" cargo run --bin cli -- list-projects --owner 1 ``` #### Other queries ```bash cargo run --bin cli -- get-account --mid 1 cargo run --bin cli -- list-refs --project <project-id> cargo run --bin cli -- list-commits --project <project-id> cargo run --bin cli -- list-collaborators --project <project-id> cargo run --bin cli -- list-verifications --mid 1 cargo run --bin cli -- list-keys --mid 1 cargo run --bin cli -- search-projects --query "my-" --limit 10 cargo run --bin cli -- project-activity --project <project-id> --limit 20 ``` #### Blocks and status ```bash cargo run --bin cli -- get-block --number 42 cargo run --bin cli -- list-blocks --limit 10 cargo run --bin cli -- status cargo run --bin cli -- stats cargo run --bin cli -- mempool-info ``` *** ### gRPC with grpcurl Server reflection is enabled, so grpcurl discovers services at runtime. ```bash grpcurl -plaintext localhost:50051 list grpcurl -plaintext localhost:50051 list makechain.MakechainService ``` #### Queries ```bash grpcurl -plaintext \ -d '{"project_id": "aabbccdd..."}' \ localhost:50051 makechain.MakechainService/GetProject grpcurl -plaintext \ -d '{"mid": 1}' \ localhost:50051 makechain.MakechainService/GetAccount # Pagination grpcurl -plaintext \ -d '{"limit": 10}' \ localhost:50051 makechain.MakechainService/ListProjects grpcurl -plaintext \ -d '{"limit": 10, "cursor": ""}' \ localhost:50051 makechain.MakechainService/ListProjects ``` #### Streaming ```bash # All messages grpcurl -plaintext -d '{}' \ localhost:50051 makechain.MakechainService/SubscribeMessages # Filter by type grpcurl -plaintext \ -d '{"message_type": "MESSAGE_TYPE_COMMIT_BUNDLE"}' \ localhost:50051 makechain.MakechainService/SubscribeMessages # Filter by project grpcurl -plaintext \ -d '{"project_id": "aabbccdd..."}' \ localhost:50051 makechain.MakechainService/SubscribeMessages ``` *** ### 
JavaScript / TypeScript The node supports grpc-web for browser clients. #### Using `@connectrpc/connect-web` ```typescript import { createClient } from "@connectrpc/connect"; import { createGrpcWebTransport } from "@connectrpc/connect-web"; import { MakechainService } from "./gen/makechain_connect"; const transport = createGrpcWebTransport({ baseUrl: "http://localhost:50051", }); const client = createClient(MakechainService, transport); // Get a project const project = await client.getProject({ projectId: new Uint8Array(/* 32-byte project ID */), }); console.log(project.name, project.visibility); // List projects const projects = await client.listProjects({ ownerMid: 1n, limit: 50, }); for (const p of projects.projects) { console.log(p.name); } // Stream messages for await (const msg of client.subscribeMessages({})) { console.log("New message:", msg.data?.type); } ``` #### Using `grpc-web` directly ```typescript import { MakechainServiceClient } from "./gen/makechain_grpc_web_pb"; import { GetProjectRequest } from "./gen/makechain_pb"; const client = new MakechainServiceClient("http://localhost:50051"); const req = new GetProjectRequest(); req.setProjectId(new Uint8Array(/* 32-byte project ID */)); client.getProject(req, {}, (err, response) => { if (err) { console.error(err.message); return; } console.log("Project:", response.getName()); }); ``` *** ### Monitoring #### Health endpoints On the metrics port (default 9090): ```bash curl http://localhost:9090/healthz # Liveness curl http://localhost:9090/readyz # Readiness ``` #### Prometheus ```bash curl http://localhost:9090/metrics # Example output: # makechain_messages_submitted_total{type="PROJECT_CREATE"} 42 # makechain_blocks_committed_total 1337 # makechain_mempool_size 15 # makechain_gossip_broadcast_total{outcome="success"} 500 # makechain_active_subscriptions 3 ``` ## API Reference Makechain exposes a single gRPC service (`MakechainService`) for reading and writing state. 
The service supports grpc-web for browser clients and server reflection for runtime discovery. ### Write Operations Submit signed messages for inclusion in the consensus pipeline. | RPC | Description | | --------------------- | ----------------------------------------------------------- | | `SubmitMessage` | Submit a single signed message (verify, validate, mempool) | | `BatchSubmitMessages` | Submit multiple signed messages atomically | | `DryRunMessage` | Validate a message against current state without submitting | ### Read Operations Query the current state of projects, accounts, refs, and commits. All list operations support cursor-based pagination (max 200 items per page). #### Projects | RPC | Description | | -------------------- | ------------------------------------------------------- | | `GetProject` | Get project metadata and status by project ID | | `GetProjectByName` | Look up a project by owner MID and project name | | `SearchProjects` | Search projects by name prefix with pagination | | `ListProjects` | List projects with optional owner filter and pagination | | `GetProjectActivity` | Recent messages for a specific project | #### Git Objects | RPC | Description | | -------------------- | ------------------------------------------------- | | `GetRef` | Get a single ref by project ID and ref name | | `ListRefs` | List all refs in a project with pagination | | `GetRefLog` | Get the update history of a ref | | `GetCommit` | Get commit metadata by project ID and commit hash | | `ListCommits` | List commits in a project with pagination | | `GetCommitAncestors` | Walk the commit graph and return ancestor chain | | `ListCollaborators` | List project collaborators with pagination | #### Accounts | RPC | Description | | -------------------- | --------------------------------------------------------------------------- | | `GetAccount` | Get account metadata, keys, storage units, project count, and verifications | | `GetAccountByKey` | Look up an account by its 
Ed25519 public key | | `GetAccountActivity` | Recent messages for a specific account | | `GetKey` | Inspect a single key entry (scope, status, allowed projects) | | `ListKeys` | List all keys registered to an account with pagination | | `ListVerifications` | List verified external addresses for an account | #### Blocks & Messages | RPC | Description | | -------------- | ------------------------------------------------------------------- | | `GetBlock` | Get a committed block by block number (includes transaction chunks) | | `ListBlocks` | List recent committed blocks (newest first) | | `GetMessage` | Look up a committed message by its BLAKE3 hash | | `ListMessages` | List committed messages across a range of blocks | ### Node Operations | RPC | Description | | ----------------- | -------------------------------------------------------------------------------- | | `GetNodeStatus` | Current block height, mempool size, pending blocks, network, version, and uptime | | `GetHealth` | Liveness and readiness probe for load balancers | | `GetChainStats` | Cumulative chain analytics (total messages, projects, accounts, blocks) | | `GetSnapshotInfo` | Current snapshot status (block number, entry count, state root) | | `GetMempoolInfo` | Mempool size and per-type message counts | ### Streaming | RPC | Description | | ------------------- | --------------------------------------------- | | `SubscribeMessages` | Server-streaming RPC for live message updates | | `SubscribeBlocks` | Server-streaming RPC for live block updates | `SubscribeMessages` supports filtering by: * **`project_id`** — only receive messages for a specific project * **`types`** — only receive specific message types (e.g., only `COMMIT_BUNDLE`) ### Connection The default gRPC endpoint is `localhost:50051`. Use `--grpc-addr` to configure. 
```bash # gRPC (native clients) grpcurl -plaintext localhost:50051 list # CLI client cargo run --bin cli -- --endpoint http://localhost:50051 get-account --mid 1 ``` ### REST Gateway A Cloudflare Workers gateway translates HTTP REST requests into gRPC calls. This is the recommended way for browser clients, mobile apps, and any HTTP-native integration to interact with makechain. * All endpoints return JSON * Input validated with Zod schemas * SSE streaming for real-time message and block updates * gRPC-web passthrough for clients that prefer raw protobuf See the [REST API reference](/api/rest) for complete endpoint documentation, or try the [interactive API explorer](https://api.makechain.net/reference) to call endpoints directly from your browser. ### grpc-web Browser clients can also connect via grpc-web (HTTP/1.1) directly. The node accepts HTTP/1.1 requests and translates them to gRPC internally via `tonic-web`. CORS headers are configured to allow cross-origin requests. ## REST API The Cloudflare Workers gateway translates HTTP REST requests to the underlying gRPC service. All endpoints return JSON and accept standard query parameters. **Base URL:** `https://api.makechain.net` **Interactive docs:** [`api.makechain.net/reference`](https://api.makechain.net/reference) — try endpoints directly in the browser. ### Projects #### GET /v1/projects/:id Get a project by its 32-byte hex ID. ```bash curl https://api.makechain.net/v1/projects/a1b2c3d4... ``` **Response:** ```json { "project_id": "a1b2c3d4...", "name": "my-project", "description": "A sample project", "license": "MIT", "visibility": "public", "owner_mid": 42, "status": "active", "fork_source": null, "ref_count": 3, "collaborator_count": 2, "commit_count": 47, "max_refs": 100, "max_collaborators": 50, "max_commits": 10000 } ``` #### GET /v1/projects/by-name/:mid/:name Look up a project by owner MID and name. 
```bash curl https://api.makechain.net/v1/projects/by-name/42/my-project ``` #### GET /v1/projects List projects. Supports pagination and owner filtering. | Parameter | Type | Description | | ----------- | ------ | --------------------------------- | | `owner_mid` | string | Filter by owner account MID | | `limit` | string | Max results per page (1-200) | | `cursor` | string | Hex cursor from previous response | ```bash curl "https://api.makechain.net/v1/projects?owner_mid=42&limit=10" ``` **Response:** ```json { "projects": [{ "project_id": "...", "name": "...", ... }], "next_cursor": "abcd1234..." } ``` #### GET /v1/projects/search Search projects by name prefix. | Parameter | Type | Description | | ----------- | ------ | ------------------------- | | `query` | string | Name prefix to search for | | `owner_mid` | string | Optional owner filter | | `limit` | string | Max results per page | | `cursor` | string | Pagination cursor | #### GET /v1/projects/:id/activity Recent activity feed for a project. | Parameter | Type | Description | | --------- | ------ | --------------------- | | `limit` | string | Max entries to return | ### Refs #### GET /v1/projects/:id/refs List refs in a project (branches and tags). | Parameter | Type | Description | | --------- | ------ | -------------------- | | `limit` | string | Max results per page | | `cursor` | string | Pagination cursor | **Response:** ```json { "refs": [ { "project_id": "a1b2c3d4...", "ref_name": "refs/heads/main", "ref_type": "branch", "hash": "c0ff33de...", "nonce": 5 } ], "next_cursor": null } ``` #### GET /v1/projects/:id/refs/:name Get a single ref by name. #### GET /v1/projects/:id/refs/:name/log Get the update history of a ref. 
| Parameter | Type | Description | | --------- | ------ | --------------------- | | `limit` | string | Max entries to return | **Response:** ```json { "entries": [ { "old_hash": "aabbccdd...", "new_hash": "c0ff33de...", "mid": 42, "timestamp": 1740000100, "block_number": 1312, "force": false, "nonce": 5 } ] } ``` ### Commits #### GET /v1/projects/:id/commits List commits in a project. | Parameter | Type | Description | | --------- | ------ | -------------------- | | `limit` | string | Max results per page | | `cursor` | string | Pagination cursor | #### GET /v1/projects/:id/commits/:hash Get a single commit by hash. **Response:** ```json { "project_id": "a1b2c3d4...", "commit": { "hash": "c0ff33de...", "parents": ["aabbccdd..."], "tree_root": "11223344...", "author_mid": 42, "author_timestamp": 1740000100, "title": "feat: add user authentication", "message_hash": "eeff0011..." } } ``` #### GET /v1/projects/:id/commits/:hash/ancestors Walk the commit graph and return the ancestor chain. | Parameter | Type | Description | | --------- | ------ | ----------------------- | | `limit` | string | Max ancestors to return | ### Collaborators #### GET /v1/projects/:id/collaborators List project collaborators. | Parameter | Type | Description | | --------- | ------ | -------------------- | | `limit` | string | Max results per page | | `cursor` | string | Pagination cursor | **Response:** ```json { "collaborators": [ { "mid": 87, "permission": "write" } ], "next_cursor": null } ``` ### Accounts #### GET /v1/accounts/:mid Get account by MID (Make ID). **Response:** ```json { "mid": 42, "username": "alice", "avatar": "", "bio": "", "website": "", "keys": [ { "key": "abcd1234...", "scope": "owner", "allowed_projects": [] } ], "storage_units": 1, "project_count": 3, "verifications": [ { "verification_type": "eth_address", "address": "d8da6bf2...", "chain_id": "01" } ] } ``` #### GET /v1/accounts/by-key/:key Look up an account by its Ed25519 public key (64-char hex). 
#### GET /v1/accounts/:mid/keys List all keys registered to an account. | Parameter | Type | Description | | --------- | ------ | -------------------- | | `limit` | string | Max results per page | | `cursor` | string | Pagination cursor | #### GET /v1/accounts/:mid/keys/:key Get a single key entry. #### GET /v1/accounts/:mid/verifications List verified external addresses for an account. | Parameter | Type | Description | | --------- | ------ | -------------------- | | `limit` | string | Max results per page | | `cursor` | string | Pagination cursor | #### GET /v1/accounts/:mid/activity Recent activity feed for an account. | Parameter | Type | Description | | --------- | ------ | --------------------- | | `limit` | string | Max entries to return | ### Blocks #### GET /v1/blocks/:number Get a committed block by number. **Response:** ```json { "block": { "hash": "deadbeef...", "header": { "block_number": 1312, "timestamp": 1740000205, "version": 1, "network": 3, "parent_hash": "aabbccdd...", "state_root": "11223344..." } }, "message_count": 5 } ``` #### GET /v1/blocks List recent blocks. | Parameter | Type | Description | | --------- | ------ | --------------------- | | `start` | string | Starting block number | | `limit` | string | Max blocks to return | ### Messages #### GET /v1/messages/:hash Look up a committed message by its BLAKE3 hash (64-char hex). #### GET /v1/messages List committed messages across a block range. | Parameter | Type | Description | | ------------- | ------ | ---------------------- | | `start_block` | string | Starting block number | | `end_block` | string | Ending block number | | `limit` | string | Max messages to return | ### Write Endpoints #### POST /v1/messages/submit Submit a signed protobuf `Message` for consensus. The request body is the raw protobuf-encoded `Message` bytes. **Response:** ```json { "hash": "c0ff33de...", "accepted": true, "error": "" } ``` #### POST /v1/messages/batch Submit up to 100 signed messages atomically. 
The request body is a protobuf-encoded `BatchSubmitRequest`. **Response:** ```json { "results": [ { "hash": "c0ff33de...", "accepted": true, "error": "" }, { "hash": "deadbeef...", "accepted": false, "error": "duplicate message" } ], "accepted_count": 1, "rejected_count": 1 } ``` #### POST /v1/messages/dry-run Validate a message against the current state without submitting. **Response:** ```json { "would_accept": true, "error": "", "error_stage": "" } ``` ### Chain Status #### GET /v1/chain/stats Cumulative chain analytics. #### GET /v1/chain/status Current node status (block height, mempool, network, version, uptime). #### GET /v1/chain/snapshot Snapshot export status. #### GET /v1/chain/mempool Mempool breakdown by message type. **Response:** ```json { "total": 12, "capacity": 10000, "account_messages": 3, "project_messages": 9, "type_breakdown": [ { "message_type": "COMMIT_BUNDLE", "count": 5 }, { "message_type": "REF_UPDATE", "count": 4 } ], "oldest_timestamp": 1740000100, "newest_timestamp": 1740000500 } ``` #### GET /v1/health Liveness and readiness probe. ### Streaming (SSE) Server-Sent Events endpoints for real-time updates. These bridge gRPC server streaming to browser-friendly SSE. #### GET /v1/subscribe/messages Stream finalized messages as SSE events. | Parameter | Type | Description | | ------------ | ------ | ---------------------------------------------- | | `project_id` | string | Optional: filter to one project (64-char hex) | | `types` | string | Optional: comma-separated message type numbers | ```bash curl -N "https://api.makechain.net/v1/subscribe/messages?types=20,10" ``` **SSE format:** ``` event: message data: {"hash":"c0ff33de...","signer":"abcd1234...","type":20,"type_name":"COMMIT_BUNDLE","mid":42,"timestamp":1740000100,"project_id":"a1b2c3d4..."} ``` #### GET /v1/subscribe/blocks Stream block finalization events. 
```bash curl -N "https://api.makechain.net/v1/subscribe/blocks" ``` **SSE format:** ``` event: block data: {"block_number":1312,"hash":"deadbeef...","timestamp":1740000205,"state_root":"11223344...","message_count":5} ``` ### gRPC-web Passthrough For clients that prefer raw gRPC-web, the gateway proxies all requests to `/makechain.MakechainService/*` directly to the node. ```bash # Example using grpcurl through the gateway grpcurl -plaintext api.makechain.net:443 makechain.MakechainService/GetHealth ``` ### Input Validation All path parameters and query strings are validated with Zod schemas. Invalid inputs return a structured 400 error: ```json { "error": "validation error", "issues": [ { "path": "id", "message": "must be a 64-character hex string (32 bytes)" } ] } ``` ### Error Format gRPC errors are translated to HTTP status codes: | gRPC Code | HTTP Status | Meaning | | --------- | ----------- | ------------------ | | 3 | 400 | Invalid argument | | 5 | 404 | Not found | | 6 | 409 | Already exists | | 7 | 403 | Permission denied | | 8 | 429 | Resource exhausted | | 16 | 401 | Unauthenticated | All errors return: ```json { "error": "human-readable error message", "code": 5 } ``` ## RPC Reference Complete reference for all `MakechainService` gRPC methods. All byte fields use raw bytes in gRPC and hex encoding in the CLI. ### Pagination List operations support cursor-based pagination. Pass `limit` (max 200) and receive a `next_cursor` in the response. Pass `next_cursor` as `cursor` in the next request to fetch the next page. An empty `next_cursor` means no more results. ### Error Handling All RPCs return standard gRPC status codes: | Code | Meaning | | -------------------- | --------------------------------------------- | | `NOT_FOUND` | Requested resource doesn't exist | | `INVALID_ARGUMENT` | Malformed request (e.g., invalid hash length) | | `RESOURCE_EXHAUSTED` | Rate limit exceeded | | `INTERNAL` | Server-side error (state lock poisoned, etc.) 
| *** ### Write Operations #### SubmitMessage Submit a single signed message for consensus inclusion. ``` rpc SubmitMessage(SubmitMessageRequest) returns (SubmitMessageResponse) ``` **Request:** A fully signed `Message` (with `hash`, `signature`, `signer`, and `data` fields populated). **Response:** * `hash` (bytes) — BLAKE3 hash of the accepted message * `accepted` (bool) — whether the message was added to the mempool * `error` (string) — error description if rejected **Rejection reasons:** invalid signature, failed structural validation, failed state pre-check (unknown key, wrong scope), rate limited, mempool full, duplicate hash. #### BatchSubmitMessages Submit up to 100 signed messages atomically. Each message is validated independently. ``` rpc BatchSubmitMessages(BatchSubmitRequest) returns (BatchSubmitResponse) ``` **Request:** `messages` — array of signed `Message` objects (max 100). **Response:** * `results` — per-message result (hash, accepted, error) * `accepted_count` / `rejected_count` — summary counters #### DryRunMessage Validate a message against current state without adding it to the mempool. ``` rpc DryRunMessage(DryRunMessageRequest) returns (DryRunMessageResponse) ``` **Request:** A signed `Message`. **Response:** * `valid` (bool) — would this message be accepted? * `error` (string) — validation error if invalid *** ### Projects #### GetProject Get project metadata by project ID. ``` rpc GetProject(GetProjectRequest) returns (GetProjectResponse) ``` **Request:** `project_id` (32 bytes) **Response:** Project metadata including `name`, `description`, `license`, `visibility`, `owner_mid`, `status` ("active"/"archived"/"removed"), `fork_source`, `ref_count`, `collaborator_count`, `commit_count`, and per-project limits (`max_refs`, `max_collaborators`, `max_commits`). #### GetProjectByName Look up a project by owner MID and project name. 
``` rpc GetProjectByName(GetProjectByNameRequest) returns (GetProjectResponse) ``` **Request:** * `owner_mid` (uint64) * `name` (string) **Response:** Same as `GetProject`. #### SearchProjects Search projects by name prefix. ``` rpc SearchProjects(SearchProjectsRequest) returns (ListProjectsResponse) ``` **Request:** * `query` (string) — name prefix to match * `owner_mid` (uint64) — filter by owner (0 = all owners) * `limit` (uint32) — max results (default 50, max 200) **Response:** `projects` array + `next_cursor` for pagination. #### ListProjects List all projects with optional owner filter. ``` rpc ListProjects(ListProjectsRequest) returns (ListProjectsResponse) ``` **Request:** * `owner_mid` (uint64) — filter by owner (0 = all) * `limit` (uint32) * `cursor` (bytes) — pagination cursor #### GetProjectActivity Get recent messages for a project. ``` rpc GetProjectActivity(GetProjectActivityRequest) returns (GetProjectActivityResponse) ``` **Request:** * `project_id` (32 bytes) * `limit` (uint32) — max messages (default 50) * `types` (repeated MessageType) — filter by type (empty = all) **Response:** `messages` — array of `MessageEntry` (hash, type, timestamp, mid, signer). *** ### Refs #### GetRef Get a single ref by project and ref name. ``` rpc GetRef(GetRefRequest) returns (GetRefResponse) ``` **Request:** * `project_id` (32 bytes) * `ref_name` (bytes) — e.g., `refs/heads/main` **Response:** `project_id`, `ref_name`, `ref_type` (BRANCH/TAG), `hash` (current commit), `nonce`. #### ListRefs List all refs in a project. ``` rpc ListRefs(ListRefsRequest) returns (ListRefsResponse) ``` **Request:** `project_id`, `limit`, `cursor` **Response:** `refs` array + `next_cursor`. #### GetRefLog Get the update history of a ref (like `git reflog`). 
```
rpc GetRefLog(GetRefLogRequest) returns (GetRefLogResponse)
```

**Request:**

* `project_id` (32 bytes)
* `ref_name` (bytes)
* `limit` (uint32)

**Response:** `entries` — array of `RefLogEntry` (`nonce`, `old_hash`, `new_hash`, `timestamp`, `mid`).

***

### Commits

#### GetCommit

Get commit metadata by hash.

```
rpc GetCommit(GetCommitRequest) returns (GetCommitResponse)
```

**Request:** `project_id` (32 bytes), `commit_hash` (32 bytes)

**Response:** `CommitMeta` with `hash`, `parents`, `tree_root`, `author_mid`, `author_timestamp`, `title`, `message_hash`.

#### ListCommits

List commits in a project.

```
rpc ListCommits(ListCommitsRequest) returns (ListCommitsResponse)
```

**Request:** `project_id`, `limit`, `cursor`

**Response:** `commits` array + `next_cursor`.

#### GetCommitAncestors

Walk the first-parent commit ancestry chain.

```
rpc GetCommitAncestors(GetCommitAncestorsRequest) returns (GetCommitAncestorsResponse)
```

**Request:**

* `project_id` (32 bytes)
* `commit_hash` (32 bytes) — starting commit
* `limit` (uint32) — max ancestors to return

**Response:** `ancestors` — array of `CommitMeta` in reverse chronological order.

***

### Collaborators

#### ListCollaborators

List collaborators for a project.

```
rpc ListCollaborators(ListCollaboratorsRequest) returns (ListCollaboratorsResponse)
```

**Request:** `project_id`, `limit`, `cursor`

**Response:** `collaborators` — array of `CollaboratorEntry` (`mid`, `permission`, `added_at`) + `next_cursor`.

***

### Accounts

#### GetAccount

Get account metadata and summary.

```
rpc GetAccount(GetAccountRequest) returns (GetAccountResponse)
```

**Request:** `mid` (uint64)

**Response:** `mid`, `storage_units`, `project_count`, `key_count`, `verification_count`, `username`, `bio`, `avatar`, `website`.

#### GetAccountByKey

Look up an account by its Ed25519 public key.

```
rpc GetAccountByKey(GetAccountByKeyRequest) returns (GetAccountResponse)
```

**Request:** `key` (32 bytes — Ed25519 public key)

**Response:** Same as `GetAccount`.
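`GetCommitAncestors` follows only the first parent of each commit, analogous to `git log --first-parent`. The traversal can be sketched over an in-memory commit map (the data and helper are illustrative, not the server implementation):

```python
def first_parent_ancestors(commits, start, limit):
    """Walk the first-parent chain from `start`, returning up to `limit`
    ancestor hashes, newest first. `commits` maps hash -> list of parent hashes."""
    out, cur = [], start
    while len(out) < limit:
        parents = commits.get(cur, [])
        if not parents:          # root commit: no further ancestors
            break
        cur = parents[0]         # first parent only; merge parents are skipped
        out.append(cur)
    return out

# Toy history: c3 merges branch tip b1 into c2 (c2 is the first parent).
commits = {"c3": ["c2", "b1"], "c2": ["c1"], "c1": ["c0"], "c0": [], "b1": ["c1"]}
print(first_parent_ancestors(commits, "c3", limit=10))  # ['c2', 'c1', 'c0']
```

Note that `b1` never appears in the result: second parents of merges are not traversed.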
#### GetAccountActivity

Get recent messages authored by an account.

```
rpc GetAccountActivity(GetAccountActivityRequest) returns (GetAccountActivityResponse)
```

**Request:**

* `mid` (uint64)
* `limit` (uint32)

**Response:** `messages` — array of `MessageEntry`.

***

### Keys

#### GetKey

Inspect a single key entry.

```
rpc GetKey(GetKeyRequest) returns (GetKeyResponse)
```

**Request:** `mid` (uint64), `key` (32 bytes — public key)

**Response:** `mid`, `key`, `scope` (OWNER/SIGNING/AGENT), `added_at`, `allowed_projects` (for AGENT-scoped keys).

#### ListKeys

List all keys registered to an account.

```
rpc ListKeys(ListKeysRequest) returns (ListKeysResponse)
```

**Request:** `mid`, `limit`, `cursor`

**Response:** `keys` array + `next_cursor`.

***

### Verifications

#### ListVerifications

List verified external addresses for an account.

```
rpc ListVerifications(ListVerificationsRequest) returns (ListVerificationsResponse)
```

**Request:** `mid`, `limit`, `cursor`

**Response:** `verifications` — array of `VerificationEntry` (`address`, `type`, `chain_id`, `added_at`) + `next_cursor`.

***

### Blocks & Messages

#### GetBlock

Get a committed block by number.

```
rpc GetBlock(GetBlockRequest) returns (GetBlockResponse)
```

**Request:** `block_number` (uint64)

**Response:** `block` (full Block with header, hash, chunks, transactions), `message_count`.

#### ListBlocks

List recent committed blocks (newest first).

```
rpc ListBlocks(ListBlocksRequest) returns (ListBlocksResponse)
```

**Request:**

* `start` (uint64) — starting block number (0 = latest)
* `limit` (uint32) — max blocks to return

**Response:** `blocks` array.

#### GetMessage

Look up a committed message by its BLAKE3 hash.

```
rpc GetMessage(GetMessageRequest) returns (GetMessageResponse)
```

**Request:** `hash` (32 bytes)

**Response:** `message` (full Message), `block_number` (which block it was committed in).

#### ListMessages

List committed messages across a range of blocks.
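The "wrong scope" rejection reason for `SubmitMessage` ties back to the key scopes above. A plausible sketch of that check, assuming for illustration that OWNER and SIGNING keys are unrestricted while AGENT keys are confined to their `allowed_projects` (the validator's actual rules may differ):

```python
def key_may_sign(scope, allowed_projects, project_id):
    """Illustrative scope check with ASSUMED semantics: OWNER and SIGNING keys
    are unrestricted; AGENT keys may only sign messages for projects listed
    in their allowed_projects."""
    if scope in ("OWNER", "SIGNING"):
        return True
    if scope == "AGENT":
        return project_id in allowed_projects
    return False

agent_projects = [b"\x01" * 32]
print(key_may_sign("AGENT", agent_projects, b"\x01" * 32))  # True: allowed project
print(key_may_sign("AGENT", agent_projects, b"\x02" * 32))  # False: wrong scope
```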
```
rpc ListMessages(ListMessagesRequest) returns (ListMessagesResponse)
```

**Request:**

* `start_block` (uint64) — start of range (0 = latest)
* `end_block` (uint64) — end of range (0 = same as start)
* `limit` (uint32) — max messages

**Response:** `messages` — array of `MessageEntry`.

***

### Node Operations

#### GetNodeStatus

Current node status.

```
rpc GetNodeStatus(GetNodeStatusRequest) returns (GetNodeStatusResponse)
```

**Response:** `block_height`, `mempool_size`, `pending_blocks`, `network` (devnet/testnet/mainnet), `version`, `uptime_seconds`.

#### GetChainStats

Cumulative chain analytics.

```
rpc GetChainStats(GetChainStatsRequest) returns (GetChainStatsResponse)
```

**Response:** `total_messages`, `total_projects`, `total_accounts`, `total_blocks`, `total_refs`, `total_commits`.

#### GetHealth

Liveness and readiness probe for load balancers.

```
rpc GetHealth(GetHealthRequest) returns (GetHealthResponse)
```

**Response:** `live` (bool), `ready` (bool), `block_height`.

#### GetSnapshotInfo

Current snapshot persistence status.

```
rpc GetSnapshotInfo(GetSnapshotInfoRequest) returns (GetSnapshotInfoResponse)
```

**Response:** `block_number`, `entry_count`, `state_root`, `estimated_size_bytes`.

#### GetMempoolInfo

Mempool size and per-type message breakdown.

```
rpc GetMempoolInfo(GetMempoolInfoRequest) returns (GetMempoolInfoResponse)
```

**Response:** `total_pending`, `by_type` (map of MessageType → count).

***

### Streaming

#### SubscribeMessages

Server-streaming RPC for live message updates. The subscriber receives every matching message as it is committed to a block.

```
rpc SubscribeMessages(SubscribeRequest) returns (stream Message)
```

**Request:**

* `project_id` (bytes) — filter to a specific project (empty = all)
* `types` (repeated MessageType) — filter by message type (empty = all)

**Stream:** Continuous stream of `Message` objects.

#### SubscribeBlocks

Server-streaming RPC for live block updates.
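The `SubscribeMessages` filter semantics (an empty field matches everything) reduce to a small predicate. A sketch of how a subscriber might apply the same logic client-side (illustrative only, not server code):

```python
def matches(msg_project_id, msg_type, filter_project_id=b"", filter_types=()):
    """Apply SubscribeMessages filter semantics: an empty project_id filter
    matches every project, and an empty types list matches every message type."""
    if filter_project_id and msg_project_id != filter_project_id:
        return False
    if filter_types and msg_type not in filter_types:
        return False
    return True

# Empty filters match everything:
print(matches(b"\xaa" * 32, "COMMIT"))                                   # True
# A project filter excludes messages from other projects:
print(matches(b"\xbb" * 32, "COMMIT", filter_project_id=b"\xaa" * 32))   # False
# A type filter narrows to the listed types:
print(matches(b"\xaa" * 32, "REF_UPDATE", filter_types=("COMMIT",)))     # False
```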
```
rpc SubscribeBlocks(SubscribeBlocksRequest) returns (stream GetBlockResponse)
```

**Stream:** Continuous stream of `GetBlockResponse` for each committed block.

***

### Rate Limiting

All write operations (`SubmitMessage`, `BatchSubmitMessages`) are rate-limited per account (MID). The default configuration allows 100 burst tokens with 10 tokens/second refill. When rate-limited, the RPC returns `RESOURCE_EXHAUSTED`. Read operations and streaming RPCs are not rate-limited.
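The defaults above describe a standard token-bucket limiter. A minimal simulation of that behavior, assuming one token is spent per submitted message (implied by the text but not stated outright):

```python
class TokenBucket:
    """Token-bucket rate limiter matching the documented defaults:
    burst capacity 100 tokens, refill rate 10 tokens/second."""
    def __init__(self, capacity=100, refill_per_sec=10):
        self.capacity = capacity
        self.refill_per_sec = refill_per_sec
        self.tokens = float(capacity)
        self.last = 0.0

    def allow(self, now):
        """Try to spend one token at time `now` (seconds). A False return
        corresponds to the RPC failing with RESOURCE_EXHAUSTED."""
        elapsed = now - self.last
        self.tokens = min(self.capacity, self.tokens + elapsed * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

bucket = TokenBucket()
burst = sum(bucket.allow(now=0.0) for _ in range(150))
print(burst)                    # 100: the burst allowance, then exhausted
print(bucket.allow(now=1.0))    # True: one second later, ~10 tokens refilled
```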