HardKAS
HardKAS is a Kaspa-native developer environment for deterministic transaction planning, artifact verification, replay/debugging, RPC diagnostics and early Igra L2 workflows. It gives developers a Hardhat-like workflow without claiming that Kaspa L1 executes EVM or that the simulator implements full consensus.
```shell
pnpm add -g @hardkas/cli@alpha
hardkas init
hardkas node start
hardkas accounts list
hardkas tx send --from alice --to bob --amount 10
```
Positioning
| Term | Concrete meaning in HardKAS |
|---|---|
| TxPlan | Unsigned transaction blueprint: selected inputs, intended outputs, fee policy, network, mode and lineage. |
| SignedTx | TxPlan-derived artifact containing signing metadata and a serialized transaction ready for broadcast. |
| Receipt | Observed result of a send operation: txid, status, block/DAG observation, RPC endpoint metadata and parent artifact. |
| Lineage | Provenance block that links artifacts through artifactId, lineageId, rootArtifactId, parentArtifactId and sequence. |
What is HardKAS?
| Subsystem | What it owns | What it does not own |
|---|---|---|
| Artifacts | Canonical serialization, content hashes, schema IDs, lineage metadata and artifact verification. | Consensus finality or wallet custody. |
| Simulator | Developer light-model for DAG conflicts, selected-path movement and transaction displacement. | Full GHOSTDAG, DAGKnight, mining, P2P propagation or security proofs. |
| Query layer | Event log, artifact scan, SQLite index and domain adapters for artifacts, lineage, replay, DAG and events. | A replacement for a block explorer or archive node. |
| L2 tooling | Igra profiles, L2 RPC health, tx build/sign/send, receipts and bridge assumption checks. | Kaspa L1 EVM execution or trustless pre-ZK exits. |
How canonical hashing works
```
Artifact JSON
    │
    ▼
Remove 13 fields ──→ [contentHash, artifactId, planId, lineage, createdAt,
    │                 rpcUrl, indexedAt, file_path, file_mtime_ms,
    │                 hardkasVersion, version, parentArtifactId, signedId]
    ▼
Sort keys recursively
    │
    ▼
BigInt → string
    │
    ▼
SHA-256
    │
    ▼
contentHash (64-char hex)
```
| Excluded field | Why it is excluded from the semantic hash |
|---|---|
| contentHash | Self-reference. The hash cannot include the field that stores the hash. |
| artifactId | Artifact identity often derives from the hash, so including it would create a circular dependency. |
| planId | Planning identifier can be generated outside the semantic transaction body. |
| lineage | Lineage changes as the artifact enters a chain; the semantic payload should remain hashable without provenance metadata. |
| createdAt | Wall-clock time is nondeterministic across machines and CI runs. |
| rpcUrl | Endpoint choice is transport metadata, not transaction semantics. |
| indexedAt | Indexer timestamp depends on when the query store observed the artifact. |
| file_path | Filesystem location is local to a developer machine or CI workspace. |
| file_mtime_ms | Filesystem modification time changes when files are copied, checked out or touched. |
| hardkasVersion | Tooling version is useful metadata but should not invalidate unchanged semantic content. |
| version | Schema version may be tracked separately from the semantic payload hash. |
| parentArtifactId | Parent pointer is provenance metadata; the child payload can be checked independently. |
| signedId | Signing-stage identity can be derived or assigned outside the unsigned transaction semantics. |
```json
{
  "schema": "hardkas.txPlan.v2",
  "createdAt": "2026-05-14T10:00:00Z",
  "from": "kaspatest:qqalice",
  "to": "kaspatest:qqbob",
  "amountSompi": "1000000000",
  "networkId": "testnet",
  "mode": "simulated"
}
```

The same object with a different `createdAt` produces an identical hash input:

```json
{"amountSompi":"1000000000","from":"kaspatest:qqalice","mode":"simulated","networkId":"testnet","schema":"hardkas.txPlan.v2","to":"kaspatest:qqbob"}
```
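The pipeline above can be sketched in a few lines. This is an illustrative reimplementation, not the shipped `@hardkas/core` code; the excluded-field list and the BigInt and key-sorting rules follow the diagram and table above.

```typescript
import { createHash } from "node:crypto";

// Illustrative sketch of HardKAS-style canonical hashing. NOT the actual
// @hardkas/core implementation; the field list mirrors the table above.
const EXCLUDED = new Set([
  "contentHash", "artifactId", "planId", "lineage", "createdAt",
  "rpcUrl", "indexedAt", "file_path", "file_mtime_ms",
  "hardkasVersion", "version", "parentArtifactId", "signedId",
]);

// Recursively sort keys and stringify BigInts so JSON.stringify is stable.
function canonicalize(value: unknown): unknown {
  if (typeof value === "bigint") return value.toString();
  if (Array.isArray(value)) return value.map(canonicalize);
  if (value !== null && typeof value === "object") {
    const out: Record<string, unknown> = {};
    for (const key of Object.keys(value as object).sort()) {
      out[key] = canonicalize((value as Record<string, unknown>)[key]);
    }
    return out;
  }
  return value;
}

function contentHash(artifact: Record<string, unknown>): string {
  // Strip the excluded metadata fields, then hash the canonical payload.
  const body: Record<string, unknown> = {};
  for (const [key, v] of Object.entries(artifact)) {
    if (!EXCLUDED.has(key)) body[key] = v;
  }
  const canonical = JSON.stringify(canonicalize(body));
  return createHash("sha256").update(canonical).digest("hex"); // 64-char hex
}
```

Two artifacts that differ only in `createdAt` or key order hash identically, which is what makes verification reproducible across machines and CI runs.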
Artifact format
| Field | Role |
|---|---|
| schema | Artifact type identifier, for example hardkas.txPlan.v2. |
| networkId | Prevents mixing testnet, mainnet, simnet or L2 artifacts. |
| mode | Separates simulated evidence from real network observations. |
| contentHash | Integrity seal over the canonical semantic payload. |
| artifactId | Identity used by lineage; expected to match contentHash when both are present. |
| lineage | Provenance block connecting the artifact to a workflow and its parent. |
```jsonc
{
  "schema": "hardkas.txPlan.v2",          // artifact type
  "hardkasVersion": "0.2.2-alpha.1",      // producer version, excluded from hash
  "version": "2.0.0",                     // schema version, excluded from hash
  "networkId": "testnet",                 // target network
  "mode": "simulated",                    // simulated or real
  "contentHash": "aaaaaaaa...aaaaaaaa",   // SHA-256 canonical payload seal
  "lineage": {
    "artifactId": "aaaaaaaa...aaaaaaaa",  // normally matches contentHash
    "lineageId": "111111...111111",       // stable workflow id
    "rootArtifactId": "aaaaaaaa...aaaaaaaa", // first artifact in flow
    "parentArtifactId": null,
    "sequence": 0
  },
  "plan": {
    "from": "kaspatest:qqalice",
    "to": "kaspatest:qqbob",
    "amountSompi": "1000000000",
    "feeSompi": "10000",
    "inputs": [{ "outpoint": "0f00...abcd:0", "valueSompi": "1000010000" }],
    "outputs": [{ "address": "kaspatest:qqbob", "valueSompi": "1000000000" }]
  }
}
```
```json
{
  "schema": "hardkas.txReceipt.v2",
  "networkId": "testnet",
  "mode": "simulated",
  "contentHash": "cccccccc...cccccccc",
  "lineage": {
    "artifactId": "cccccccc...cccccccc",
    "lineageId": "111111...111111",
    "rootArtifactId": "aaaaaaaa...aaaaaaaa",
    "parentArtifactId": "bbbbbbbb...bbbbbbbb",
    "sequence": 2
  },
  "txid": "4d2c...9999",
  "status": "accepted",
  "observedAtDaaScore": 924211,
  "rpc": { "endpointLabel": "local-simnet", "rpcUrl": "http://127.0.0.1:16110" }
}
```
Lineage chain
```
┌─────────────┐  contentHash: a1b2c3...
│   TxPlan    │  planId: plan_001
│   seq: 0    │  parentArtifactId: none
└──────┬──────┘
       │
       ▼
┌─────────────┐  contentHash: c3d4e5...
│  SignedTx   │  signedId: signed_001
│   seq: 1    │  parentArtifactId: a1b2c3...
└──────┬──────┘
       │
       ▼
┌─────────────┐  contentHash: e5f6a7...
│   Receipt   │  txId: 4d2c...9999
│   seq: 2    │  parentArtifactId: c3d4e5...
└─────────────┘
```
| verifyLineage check | Failure detected |
|---|---|
| Structural integrity | Missing lineage, missing required IDs, or non-64-char hex identifiers. |
| Identity consistency | lineage.artifactId differs from artifact.contentHash. |
| Chain continuity | Child parentArtifactId does not equal parent artifactId. |
| Lineage stability | Child and parent use different lineageId or rootArtifactId. |
| Sequence monotonicity | Child sequence is not greater than parent sequence. |
| Cross-network contamination | Child and parent have different networkId. |
| Mode contamination | Child and parent mix simulated and real modes. |
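The checks in the table can be sketched as a parent/child link validator. Field names follow the artifact examples above, but the code is illustrative; it is not the actual `verifyLineage` implementation.

```typescript
// Illustrative parent/child lineage check mirroring the verifyLineage table.
// NOT the actual HardKAS implementation; types are assumed from the examples.
interface Lineage {
  artifactId: string;
  lineageId: string;
  rootArtifactId: string;
  parentArtifactId: string | null;
  sequence: number;
}

interface Artifact {
  contentHash: string;
  networkId: string;
  mode: "simulated" | "real";
  lineage: Lineage;
}

const HEX64 = /^[0-9a-f]{64}$/;

function verifyLink(child: Artifact, parent: Artifact): string[] {
  const errors: string[] = [];
  // Structural integrity: identifiers must be 64-char hex.
  for (const id of [child.lineage.artifactId, parent.lineage.artifactId]) {
    if (!HEX64.test(id)) errors.push(`non-64-char-hex id: ${id}`);
  }
  // Identity consistency: lineage.artifactId must equal contentHash.
  if (child.lineage.artifactId !== child.contentHash)
    errors.push("artifactId differs from contentHash");
  // Chain continuity: child must point at its parent.
  if (child.lineage.parentArtifactId !== parent.lineage.artifactId)
    errors.push("parentArtifactId does not match parent artifactId");
  // Lineage stability: lineageId and rootArtifactId must not drift.
  if (child.lineage.lineageId !== parent.lineage.lineageId ||
      child.lineage.rootArtifactId !== parent.lineage.rootArtifactId)
    errors.push("lineageId or rootArtifactId drift");
  // Sequence monotonicity.
  if (!(child.lineage.sequence > parent.lineage.sequence))
    errors.push("sequence not greater than parent sequence");
  // Cross-network and mode contamination.
  if (child.networkId !== parent.networkId) errors.push("networkId mismatch");
  if (child.mode !== parent.mode) errors.push("simulated/real mode mixing");
  return errors;
}
```

A full chain check is then just this link check applied pairwise along the sequence.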
DAG conflict resolution
```
Block A (daaScore: 1)        Block B (daaScore: 1)
└─ tx-1: spends utxo-X       └─ tx-2: spends utxo-X
        │                            │
        ▼                            ▼
    ACCEPTED                     DISPLACED
first in deterministic       conflict: utxo-X
selected-path order          already spent by tx-1
```
| Priority | Rule | Effect |
|---|---|---|
| 1 | Sink ancestry | Blocks in the selected path to the current sink have absolute priority. |
| 2 | Deterministic order | If ancestry does not decide, tie-break by approximate DAA score, then block ID lexicographically. |
| 3 | Transaction ID | If blocks still tie, use txid lexicographical order. |
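The three-step tie-break above reads naturally as a comparator. The types here are invented for this sketch; they are not the simulator's real data structures.

```typescript
// Illustrative comparator for the conflict-resolution priorities above.
// Types are assumptions for this sketch, not the HardKAS simulator's own.
interface SimBlock {
  id: string;              // block ID, compared lexicographically
  daaScore: number;        // approximate DAA score
  onSelectedPath: boolean; // block is in the selected path to the current sink
}

// Negative result: block a's transaction wins the conflict; positive: b's.
function conflictOrder(a: SimBlock, b: SimBlock, txidA: string, txidB: string): number {
  // 1. Sink ancestry: selected-path blocks have absolute priority.
  if (a.onSelectedPath !== b.onSelectedPath) return a.onSelectedPath ? -1 : 1;
  // 2. Deterministic order: approximate DAA score, then block ID.
  if (a.daaScore !== b.daaScore) return a.daaScore - b.daaScore;
  if (a.id !== b.id) return a.id < b.id ? -1 : 1;
  // 3. Transaction ID lexicographical order.
  return txidA < txidB ? -1 : txidA > txidB ? 1 : 0;
}
```

Because every rule is a pure comparison over stable fields, replaying the same block set always displaces the same transactions, which is the point of the light model.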
Query engine
| Domain | Operations | Example |
|---|---|---|
| artifacts | list, inspect, diff, verify | hardkas query artifacts list --mode simulated |
| lineage | chain, transitions, orphans | hardkas query lineage chain --hash a1b2... |
| replay | list, summary, divergences | hardkas query replay divergences |
| dag | conflicts, displaced, anomalies | hardkas query dag conflicts |
| events | list, summary | hardkas query events list --kind tx.plan.created |
| Storage layer | Role |
|---|---|
| .hardkas/events.jsonl | Append-only event stream for workflow, RPC, replay and DAG events. |
| .hardkas/artifacts/*.json | Canonical artifact files used as source of truth. |
| SQLite query store | Indexed read model over events and artifacts. |
| Filesystem fallback | Used when SQLite is missing, stale or intentionally disabled. |
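Because the event log is plain JSONL, it can be consumed without the SQLite index, which is roughly what the filesystem fallback does. A minimal sketch follows; the `kind` field is assumed from the query examples, not a documented schema.

```typescript
import { readFileSync } from "node:fs";

// Minimal JSONL event reader, sketching what a filesystem fallback might do.
// The event shape (a `kind` field) is an assumption, not a documented schema.
interface HardkasEvent {
  kind: string;
  [key: string]: unknown;
}

// Parse a JSONL string, optionally filtering by event kind.
function parseEvents(jsonl: string, kind?: string): HardkasEvent[] {
  return jsonl
    .split("\n")
    .filter((line) => line.trim().length > 0) // tolerate trailing newline
    .map((line) => JSON.parse(line) as HardkasEvent)
    .filter((e) => kind === undefined || e.kind === kind);
}

// Convenience wrapper over the append-only event stream on disk.
function readEvents(path = ".hardkas/events.jsonl", kind?: string): HardkasEvent[] {
  return parseEvents(readFileSync(path, "utf8"), kind);
}
```

An append-only stream plus a rebuildable index means the SQLite store can always be deleted and regenerated from the JSONL and artifact files.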
```shell
hardkas query artifacts list --json
```

```json
[
  {
    "artifactId": "aaaaaaaa...aaaaaaaa",
    "schema": "hardkas.txPlan.v2",
    "networkId": "testnet",
    "mode": "simulated",
    "path": ".hardkas/artifacts/tx-plan-001.json"
  }
]
```

```shell
hardkas query lineage chain cccccccc...cccccccc --json
```

```json
{
  "rootArtifactId": "aaaaaaaa...aaaaaaaa",
  "chain": [
    { "schema": "hardkas.txPlan.v2", "sequence": 0 },
    { "schema": "hardkas.signedTx.v2", "sequence": 1 },
    { "schema": "hardkas.txReceipt.v2", "sequence": 2 }
  ],
  "ok": true
}
```

```shell
hardkas query dag displaced --json
```

```json
[
  {
    "txid": "tx-alpha",
    "reason": "input-spent-by-selected-path",
    "conflictingTxid": "tx-beta",
    "outpoint": "7a01...ee:0"
  }
]
```
Architecture overview
Kaspa L1 / BlockDAG
- UTXO-based BlockDAG
- Sequencing
- Data availability
- State commitment anchoring
- Finality via Kaspa consensus
- No EVM execution
- No account-based smart contract state on L1
HardKAS
- CLI, SDK and core primitives
- Deterministic simulation
- Artifact lifecycle
- Transaction planner
- Replay engine
- Query store
- RPC diagnostics
- Node orchestrator and localnet helpers
Igra L2
- EVM-compatible execution
- Account-based state
- Contract deployment preflights
- L2 RPC checks
- L2 balance and nonce tooling
- Bridge assumption awareness
- ZK exit as target phase, not immediate guarantee
Published packages
| Package | Purpose |
|---|---|
@hardkas/cli | Command-line interface for project setup, accounts, transactions, artifacts, RPC diagnostics, DAG simulation and L2 workflows. |
@hardkas/sdk | Developer SDK for building programmatic workflows around Kaspa transactions, simulation and artifact inspection. |
@hardkas/core | Core primitives, shared types and deterministic data structures used across the CLI and SDK. |
Repository map
| Folder / package | Purpose |
|---|---|
.github/workflows | CI workflows and repeatable repository checks. |
docs | Architecture notes, security model, artifact model, query layer, simulation model and anti-patterns. |
examples | Runnable examples for transfers, localnet, trace/replay, RPC health, Igra read-only flows and CI demos. |
packages/cli | CLI command surface and command groups. |
packages/sdk | Programmatic developer SDK. |
packages/core | Shared types, primitives and deterministic core utilities. |
accounts | Development account and encrypted dev-keystore flows. |
artifacts | Structured outputs for verification, replay, lineage tracking and debugging. |
kaspa-rpc | Kaspa RPC integration and diagnostics. |
l2 | Igra L2 profiles, transaction helpers, RPC checks and bridge assumption tooling. |
localnet | Local development network helpers. |
node-orchestrator | Node start/status orchestration and environment coordination. |
query-store | Inspection tools for artifacts, lineage, replay results, DAG events and transactions. |
simulator | Deterministic light-model simulation utilities. |
testing | Test helpers and repeatable workflows. |
tx-builder | Transaction construction and planning utilities. |
Artifact lifecycle
```shell
hardkas tx plan --from alice --to bob --amount 10 --out .hardkas/plan.json
hardkas tx sign .hardkas/plan.json --account alice --out .hardkas/signed.json
hardkas tx send .hardkas/signed.json --yes
hardkas tx receipt --txid <TXID> --out .hardkas/receipt.json
hardkas artifact verify .hardkas/artifacts --recursive --strict
```
Quickstart
```shell
pnpm add -g @hardkas/cli@alpha
hardkas init
hardkas node start
hardkas accounts list
hardkas tx send --from alice --to bob --amount 10
hardkas artifact verify .hardkas/artifacts --recursive
hardkas --help
```
Local development from source
```shell
git clone https://github.com/KasLabDevs/HardKas.git
cd HardKas
pnpm install
pnpm build
hardkas init
hardkas node start
hardkas accounts list
```
Prerequisites
| Dependency | Required for | Install |
|---|---|---|
| Node.js >= 18 | All HardKAS commands | nodejs.org |
| pnpm | Package installation | npm install -g pnpm |
| Docker | hardkas node start, hardkas node status; stop/reset/logs if available in your installed CLI build | docker.com |
| Solidity compiler (solc) | hardkas l2 contract deploy-plan when generating bytecode externally; hardkas l2 contract compile if available in your installed CLI build | pnpm add -g solc or included via Docker |
hardkas doctor checks for missing dependencies and reports what is needed.

```shell
hardkas doctor
```
Docker node workflow
```
hardkas node start → container running (kaspad simnet)
  │
  ├── hardkas node status → shows container health + RPC readiness
  ├── hardkas node logs   → streams kaspad output
  ├── hardkas rpc health  → checks RPC endpoints
  │
hardkas node stop  → container stopped (data preserved)
hardkas node reset → container removed + chain data deleted
```
| Command | What it does | Destructive? |
|---|---|---|
| hardkas node start | Pulls kaspad image, starts container on simnet, maps RPC ports | No |
| hardkas node status | Shows container state and connectivity | No |
| hardkas node logs | Streams kaspad stdout from container; use only if present in your installed CLI build | No |
| hardkas node stop | Stops container, preserves chain data in .hardkas/kaspad/; use only if present in your installed CLI build | No |
| hardkas node reset | Removes container AND deletes chain data. Requires --yes or confirmation if present in your installed CLI build | YES |
node reset deletes all local chain data. This is intentional for development — it gives you a clean simnet. But it cannot be undone.

CLI reference
Project commands
| Command | Description |
|---|---|
| hardkas init | Initializes a HardKAS project, local configuration and workspace. |
| hardkas up | Starts the expected local development stack when available. |
| hardkas doctor | Runs environment checks and reports common issues. |
| hardkas config show | Prints active configuration for debugging. |

Accounts

| Command | Description |
|---|---|
| hardkas accounts list | Lists known development accounts. |
| hardkas accounts real init | Initializes local encrypted real-account support. |
| hardkas accounts real generate | Generates a new development account. |
| hardkas accounts real import | Imports an account into the local encrypted dev keystore. |
| hardkas accounts real unlock | Unlocks an account for signing operations. |
| hardkas accounts real lock | Locks the account again. |
| hardkas accounts real change-password | Rotates the local keystore password. |
| hardkas faucet | Local/simulated funding helper; must not be treated as mainnet funding. |

Node and transaction lifecycle

| Command | Description |
|---|---|
| hardkas node start | Starts or orchestrates a local Kaspa node environment. |
| hardkas node status | Shows node status and connectivity information. |
| hardkas tx profile | Shows the current transaction profile/environment. |
| hardkas tx plan | Builds a transaction plan without signing. |
| hardkas tx sign | Signs a transaction plan. |
| hardkas tx send | Broadcasts a signed or constructed transaction. |
| hardkas tx receipt | Fetches or stores a transaction receipt. |
| hardkas tx verify | Verifies transaction structure, fees, semantic assumptions and artifact integrity. |
| hardkas tx trace | Produces trace information useful for debugging and replay. |

Artifacts, replay and snapshots

| Command | Description |
|---|---|
| hardkas artifact verify | Verifies artifact integrity. |
| hardkas artifact explain | Explains an artifact in human-readable form. |
| hardkas artifact lineage | Shows provenance and relationships between artifacts. |
| hardkas replay | Replays stored evidence to reproduce behavior. |
| hardkas snapshot | Captures or restores deterministic local state snapshots. |
| hardkas test | Runs HardKAS-oriented test workflows. |

RPC diagnostics

| Command | Description |
|---|---|
| hardkas rpc info | Displays RPC endpoint information. |
| hardkas rpc health | Checks node health. |
| hardkas rpc doctor | Runs deeper RPC diagnostics. |
| hardkas rpc dag | Inspects DAG-related data exposed by RPC. |
| hardkas rpc utxos | Queries UTXOs. |
| hardkas rpc mempool | Inspects mempool state. |

DAG simulation and query layer

| Command | Description |
|---|---|
| hardkas dag status | Shows simulated DAG state. |
| hardkas dag simulate-reorg | Runs deterministic DAG reorg/conflict simulation. |
| hardkas query artifacts list | Lists stored artifacts. |
| hardkas query artifacts inspect | Inspects a stored artifact. |
| hardkas query artifacts diff | Compares artifacts. |
| hardkas query lineage chain | Shows artifact lineage chains. |
| hardkas query lineage transitions | Shows lineage transitions. |
| hardkas query lineage orphans | Reports orphaned lineage records. |
| hardkas query replay list | Lists replay runs. |
| hardkas query replay summary | Summarizes replay outputs. |
| hardkas query replay divergences | Finds replay divergences. |
| hardkas query replay invariants | Checks replay invariants. |
| hardkas query dag conflicts | Inspects DAG conflict records. |
| hardkas query dag displaced | Shows displaced transactions or records. |
| hardkas query dag history | Shows DAG history in the local query store. |
| hardkas query dag sink-path | Inspects sink-path data. |
| hardkas query dag anomalies | Reports DAG-model anomalies. |
| hardkas query events | Queries stored events. |
| hardkas query tx | Queries transaction records. |

Igra L2

| Command | Description |
|---|---|
| hardkas l2 networks | Lists configured L2 networks. |
| hardkas l2 profile show | Displays the active L2 profile. |
| hardkas l2 profile validate | Validates L2 configuration. |
| hardkas l2 tx build | Builds an L2 transaction. |
| hardkas l2 tx sign | Signs an L2 transaction. |
| hardkas l2 tx send | Sends an L2 transaction. |
| hardkas l2 tx receipt | Fetches an L2 receipt. |
| hardkas l2 tx status | Checks L2 transaction status. |
| hardkas l2 contract deploy-plan | Creates a contract deployment plan before broadcast. |
| hardkas l2 bridge status | Shows bridge-related state. |
| hardkas l2 bridge assumptions | Explains current bridge trust assumptions. |
| hardkas l2 rpc health | Checks L2 RPC health. |
| hardkas l2 balance | Checks L2 account balance. |
| hardkas l2 nonce | Checks L2 account nonce. |

Security model
| Area | Explanation |
|---|---|
| Developer keystore | Useful for repeatable signing workflows, not for high-value asset protection. |
| Mainnet guardrails | Mainnet signing should require explicit intent and visible separation from local or test profiles. |
| Threat model | Accidental mainnet signing, seed misuse, wrong environment, RPC spoofing, artifact tampering, replay confusion and bridge assumption misunderstanding. |
| Mitigations | Explicit network profiles, artifact verification, content hashes, lineage tracking, strict flags, human-readable artifact explainers, doctor commands and security documentation. |
| Command | Local / Simnet | Testnet | Mainnet |
|---|---|---|---|
| hardkas faucet | Allowed | Refused / environment-specific | Refused |
| hardkas accounts fund | Allowed | Refused / environment-specific | Refused |
| hardkas tx sign | Allowed | Allowed | Explicit mainnet confirmation required |
| hardkas tx send | Allowed | Allowed | Explicit confirmation required |
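The guardrail matrix above amounts to a small policy check that runs before any funding or broadcast action. This is a hypothetical sketch of the idea, not HardKAS's actual guardrail code.

```typescript
// Hypothetical guardrail policy mirroring the matrix above.
// NOT the actual HardKAS implementation.
type Network = "simnet" | "testnet" | "mainnet";

function assertFaucetAllowed(network: Network): void {
  // Faucet/fund helpers are development-only; refuse outside local/simnet.
  if (network !== "simnet") {
    throw new Error(`faucet refused on ${network}: local/simnet only`);
  }
}

function assertSendAllowed(network: Network, confirmedMainnet: boolean): void {
  // Signing and sending on mainnet require explicit, visible intent.
  if (network === "mainnet" && !confirmedMainnet) {
    throw new Error("mainnet send requires explicit confirmation");
  }
}
```

Putting the refusal in one shared check, rather than in each command, keeps the matrix auditable and hard to bypass accidentally.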
Networks and environments
| Environment | Use | Notes |
|---|---|---|
| Local / simulated | Fast deterministic testing and examples. | Not consensus-equivalent. |
| Testnet | Real network behavior without high-value funds. | Useful for RPC and integration checks. |
| Mainnet | Real assets. | Requires strict care. Avoid dev keystore for high-value keys. |
| Igra L2 | Separate account-based EVM-compatible execution environment. | Bridge assumptions must be explicit. |
Kaspa L1 vs Igra L2
| Layer | Responsibility | Execution model | HardKAS role |
|---|---|---|---|
| Kaspa L1 | UTXO BlockDAG, sequencing, data availability, anchoring | No EVM execution | Plans, verifies and interacts with L1 transactions and RPC |
| Igra L2 | EVM-compatible execution, account state, contracts | Account-based L2 execution | Preflights, RPC checks, L2 tx tooling and profile validation |
| Bridge | Cross-layer movement | Phase-dependent trust assumptions | Exposes assumptions rather than hiding them |
Bridge realities
| Phase | Trust model | Notes |
|---|---|---|
| Pre-ZK | Not trustless. May rely on multisig, MPC or committee assumptions. | Useful for early workflows but must be labeled. |
| MPC phase | Stronger operational model than simple multisig. | Still not equivalent to ZK trustless exits. |
| ZK phase | Target model for trust-minimized exits. | Validity depends on proof verification and data availability. |
```shell
hardkas l2 bridge assumptions
hardkas l2 bridge status
```
RPC diagnostics
| Diagnostic | Purpose |
|---|---|
| Endpoint health | Check whether the selected RPC endpoint is reachable and responding. |
| DAG visibility | Inspect DAG-related data exposed by RPC. |
| UTXO lookup | Query UTXOs for a given address. |
| Mempool inspection | Inspect pending transaction state. |
| Doctor checks | Run deeper environment diagnostics. |
```shell
hardkas rpc info
hardkas rpc health
hardkas rpc doctor
hardkas rpc dag
hardkas rpc utxos --address <ADDRESS>
hardkas rpc mempool
```
DAG simulation and reorg testing
```shell
hardkas dag status
hardkas dag simulate-reorg
hardkas query dag conflicts
hardkas query dag displaced
hardkas query dag history
hardkas query dag sink-path
hardkas query dag anomalies
```
Examples
| Example | Purpose |
|---|---|
| Hello Kaspa | Minimal project initialization and first CLI commands. |
| Basic Transfer | Plan, sign, send and verify a transaction. |
| Localnet Demo | Start a local environment and execute deterministic flows. |
| Trace & Replay | Produce traces and replay transaction behavior. |
| Snapshot Restore | Capture and restore deterministic local state. |
| RPC Node Health | Diagnose endpoint and node health. |
| Failure Debugging | Use artifacts and traces to explain unexpected behavior. |
| Igra L2 Readonly | Query L2 state without sending transactions. |
| Bridge Assumptions | Inspect current bridge trust assumptions. |
| CI Workflow | Run repeatable development checks in CI. |
| DAG Reorg Simulation | Test behavior under DAG displacement and conflict scenarios. |
Comparison
| Feature | HardKAS | Generic script | Wallet-only workflow |
|---|---|---|---|
| Deterministic planning | Yes | Manual | No |
| Artifact verification | Yes | Usually no | No |
| Replay/debugging | Yes | Ad hoc | No |
| CLI workflow | Yes | Custom | No |
| RPC diagnostics | Yes | Manual | No |
| DAG simulation | Light-model | No | No |
| L2 preflights | Preview | Manual | No |
| Security guardrails | Explicit | Developer-defined | Wallet-defined |
| CI friendliness | Designed for it | Depends | No |
| Human-readable lineage | Yes | Usually no | No |
If you come from Hardhat or Foundry
| Hardhat concept | HardKAS analogue |
|---|---|
| Network config | HardKAS profiles |
| Deployment scripts | Transaction/deploy plans |
| Transaction receipt | HardKAS receipt artifact |
| Trace/debug | HardKAS trace/replay |
| Local network | HardKAS localnet/light simulation |
| Tasks | CLI command groups |
| Artifacts | HardKAS artifacts and lineage |
Status and roadmap
| Status | Items |
|---|---|
| Stable | Deterministic artifacts, plans, signed transactions, receipts, snapshot hashing, semantic verification and encrypted dev keystore. |
| Preview | Igra L2 integration, contract deployment preflights, bridge modeling and advanced lineage extensions. |
| Research | DAG light-model, anomalies engine and deeper DAG introspection. |
| Planned | Multi-node localnet, visual trace explorer, Post-Toccata SilverScript and covenant tooling, vProg tooling and fuzzing infrastructure. |
FAQ
Is HardKAS a wallet?
No. It is developer infrastructure. It may include local signing tools, but it is not production custody software.
Does HardKAS implement Kaspa consensus?
No. It may simulate DAG-related behavior for development, but it does not replace official Kaspa consensus software.
Does Kaspa L1 run EVM?
No. EVM execution belongs to separate L2 environments such as Igra.
Can I use HardKAS on mainnet?
Only with extreme care and explicit guardrails. It is not intended for high-value custody.
Why artifacts?
Artifacts make transaction workflows inspectable, reproducible, verifiable and replayable.
What does deterministic mean here?
Given the same inputs and local state, the workflow should produce reproducible outputs such as plans, receipts, traces and replay results.
Is the bridge trustless?
Only a ZK-based exit model can be described as trustless. Pre-ZK bridge workflows must disclose their multisig/MPC assumptions.
Is Igra part of Kaspa L1?
No. Igra is a separate L2 execution environment that uses Kaspa as its base layer for sequencing and anchoring, with the exact role depending on the architecture phase.