HardKAS

HardKAS is a Kaspa-native developer environment for deterministic transaction planning, artifact verification, replay/debugging, RPC diagnostics and early Igra L2 workflows. It gives developers a Hardhat-like workflow without claiming that Kaspa L1 executes EVM or that the simulator implements full consensus.

Developer Preview · 0.2.2-alpha.1 · Kaspa Native · Local-first · Artifact-first · Igra L2 Preview

Developer Preview. APIs, command interfaces and artifact formats may evolve. Use HardKAS as developer infrastructure, not production custody software.
install
pnpm install -g @hardkas/cli@alpha
hardkas init
hardkas node start
hardkas accounts list
hardkas tx send --from alice --to bob --amount 10

Positioning

| Term | Concrete meaning in HardKAS |
| --- | --- |
| TxPlan | Unsigned transaction blueprint: selected inputs, intended outputs, fee policy, network, mode and lineage. |
| SignedTx | TxPlan-derived artifact containing signing metadata and a serialized transaction ready for broadcast. |
| Receipt | Observed result of a send operation: txid, status, block/DAG observation, RPC endpoint metadata and parent artifact. |
| Lineage | Provenance block that links artifacts through artifactId, lineageId, rootArtifactId, parentArtifactId and sequence. |

What is HardKAS?

| Subsystem | What it owns | What it does not own |
| --- | --- | --- |
| Artifacts | Canonical serialization, content hashes, schema IDs, lineage metadata and artifact verification. | Consensus finality or wallet custody. |
| Simulator | Developer light-model for DAG conflicts, selected-path movement and transaction displacement. | Full GHOSTDAG, DAGKnight, mining, P2P propagation or security proofs. |
| Query layer | Event log, artifact scan, SQLite index and domain adapters for artifacts, lineage, replay, DAG and events. | A replacement for a block explorer or archive node. |
| L2 tooling | Igra profiles, L2 RPC health, tx build/sign/send, receipts and bridge assumption checks. | Kaspa L1 EVM execution or trustless pre-ZK exits. |
Architectural honesty. HardKAS is not production custody software, not a replacement for kaspad, not a full node implementation, not a consensus implementation, not full GHOSTDAG/DAGKnight parity, not EVM on Kaspa L1, and not a trustless bridge system in pre-ZK mode.

How canonical hashing works

canonical hash pipeline

Artifact JSON
      │
      ▼
Remove 13 fields ──→ [contentHash, artifactId, planId, lineage, createdAt, rpcUrl, indexedAt,
                       file_path, file_mtime_ms, hardkasVersion, version, parentArtifactId, signedId]
      │
      ▼
Sort keys recursively
      │
      ▼
BigInt → string
      │
      ▼
SHA-256
      │
      ▼
contentHash (64-char hex)
| Excluded field | Why it is excluded from the semantic hash |
| --- | --- |
| contentHash | Self-reference. The hash cannot include the field that stores the hash. |
| artifactId | Artifact identity often derives from the hash, so including it would create a circular dependency. |
| planId | Planning identifier can be generated outside the semantic transaction body. |
| lineage | Lineage changes as the artifact enters a chain; the semantic payload should remain hashable without provenance metadata. |
| createdAt | Wall-clock time is nondeterministic across machines and CI runs. |
| rpcUrl | Endpoint choice is transport metadata, not transaction semantics. |
| indexedAt | Indexer timestamp depends on when the query store observed the artifact. |
| file_path | Filesystem location is local to a developer machine or CI workspace. |
| file_mtime_ms | Filesystem modification time changes when files are copied, checked out or touched. |
| hardkasVersion | Tooling version is useful metadata but should not invalidate unchanged semantic content. |
| version | Schema version may be tracked separately from the semantic payload hash. |
| parentArtifactId | Parent pointer is provenance metadata; the child payload can be checked independently. |
| signedId | Signing-stage identity can be derived or assigned outside the unsigned transaction semantics. |
same semantic hash despite different createdAt
{
  "schema": "hardkas.txPlan.v2",
  "createdAt": "2026-05-14T10:00:00Z",
  "from": "kaspatest:qqalice",
  "to": "kaspatest:qqbob",
  "amountSompi": "1000000000",
  "networkId": "testnet",
  "mode": "simulated"
}

// Same object, different createdAt. Hash input is identical:
{"amountSompi":"1000000000","from":"kaspatest:qqalice","mode":"simulated","networkId":"testnet","schema":"hardkas.txPlan.v2","to":"kaspatest:qqbob"}
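The pipeline above can be sketched in TypeScript. This is an illustrative reimplementation, not the HardKAS source: the excluded-field list follows the table above, Node's built-in `node:crypto` module is assumed, and whether exclusion applies recursively or only at the top level is a guess (recursive here).

```typescript
import { createHash } from "node:crypto";

// Fields stripped before hashing (see the exclusion table above).
const EXCLUDED = new Set([
  "contentHash", "artifactId", "planId", "lineage", "createdAt",
  "rpcUrl", "indexedAt", "file_path", "file_mtime_ms",
  "hardkasVersion", "version", "parentArtifactId", "signedId",
]);

// Recursively drop excluded fields, sort object keys, stringify BigInt.
function canonicalize(value: unknown): unknown {
  if (typeof value === "bigint") return value.toString();
  if (Array.isArray(value)) return value.map(canonicalize);
  if (value !== null && typeof value === "object") {
    const out: Record<string, unknown> = {};
    for (const key of Object.keys(value as object).sort()) {
      if (EXCLUDED.has(key)) continue;
      out[key] = canonicalize((value as Record<string, unknown>)[key]);
    }
    return out;
  }
  return value;
}

export function contentHash(artifact: unknown): string {
  // JSON.stringify preserves insertion order, which canonicalize made sorted.
  return createHash("sha256")
    .update(JSON.stringify(canonicalize(artifact)))
    .digest("hex");
}
```

Because `createdAt` is stripped and keys are sorted before serialization, two plans that differ only in timestamp (or in key order) hash identically, which is what makes the seal usable across machines and CI runs.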

Artifact format

| Field | Role |
| --- | --- |
| schema | Artifact type identifier, for example hardkas.txPlan.v2. |
| networkId | Prevents mixing testnet, mainnet, simnet or L2 artifacts. |
| mode | Separates simulated evidence from real network observations. |
| contentHash | Integrity seal over the canonical semantic payload. |
| artifactId | Identity used by lineage; expected to match contentHash when both are present. |
| lineage | Provenance block connecting the artifact to a workflow and its parent. |
tx-plan artifact
{
  "schema": "hardkas.txPlan.v2",              // artifact type
  "hardkasVersion": "0.2.2-alpha.1",          // producer version, excluded from hash
  "version": "2.0.0",                         // schema version, excluded from hash
  "networkId": "testnet",                     // target network
  "mode": "simulated",                        // simulated or real
  "contentHash": "aaaaaaaa...aaaaaaaa",       // SHA-256 canonical payload seal
  "lineage": {
    "artifactId": "aaaaaaaa...aaaaaaaa",       // normally matches contentHash
    "lineageId": "111111...111111",           // stable workflow id
    "rootArtifactId": "aaaaaaaa...aaaaaaaa",   // first artifact in flow
    "parentArtifactId": null,
    "sequence": 0
  },
  "plan": {
    "from": "kaspatest:qqalice",
    "to": "kaspatest:qqbob",
    "amountSompi": "1000000000",
    "feeSompi": "10000",
    "inputs": [{ "outpoint": "0f00...abcd:0", "valueSompi": "1000010000" }],
    "outputs": [{ "address": "kaspatest:qqbob", "valueSompi": "1000000000" }]
  }
}
tx-receipt artifact
{
  "schema": "hardkas.txReceipt.v2",
  "networkId": "testnet",
  "mode": "simulated",
  "contentHash": "cccccccc...cccccccc",
  "lineage": {
    "artifactId": "cccccccc...cccccccc",
    "lineageId": "111111...111111",
    "rootArtifactId": "aaaaaaaa...aaaaaaaa",
    "parentArtifactId": "bbbbbbbb...bbbbbbbb",
    "sequence": 2
  },
  "txid": "4d2c...9999",
  "status": "accepted",
  "observedAtDaaScore": 924211,
  "rpc": { "endpointLabel": "local-simnet", "rpcUrl": "http://127.0.0.1:16110" }
}

Lineage chain

artifact lineage diagram
┌─────────────┐  contentHash: a1b2c3...
│   TxPlan    │  planId: plan_001
│   seq: 0    │  parentArtifactId: none
└──────┬──────┘
       │
       ▼
┌─────────────┐  contentHash: c3d4e5...
│  SignedTx   │  signedId: signed_001
│   seq: 1    │  parentArtifactId: a1b2c3...
└──────┬──────┘
       │
       ▼
┌─────────────┐  contentHash: e5f6a7...
│   Receipt   │  txId: 4d2c...9999
│   seq: 2    │  parentArtifactId: c3d4e5...
└─────────────┘
| verifyLineage check | Failure detected |
| --- | --- |
| Structural integrity | Missing lineage, missing required IDs, or non-64-char hex identifiers. |
| Identity consistency | lineage.artifactId differs from artifact.contentHash. |
| Chain continuity | Child parentArtifactId does not equal parent artifactId. |
| Lineage stability | Child and parent use different lineageId or rootArtifactId. |
| Sequence monotonicity | Child sequence is not greater than parent sequence. |
| Cross-network contamination | Child and parent have different networkId. |
| Mode contamination | Child and parent mix simulated and real modes. |
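The parent/child checks in the table can be sketched as a small TypeScript function. This is an illustrative mirror of the checks, not the HardKAS implementation; the `Artifact` shape is inferred from the JSON examples above and `verifyLink` is a hypothetical name.

```typescript
interface Lineage {
  artifactId: string;
  lineageId: string;
  rootArtifactId: string;
  parentArtifactId: string | null;
  sequence: number;
}

interface Artifact {
  contentHash: string;
  networkId: string;
  mode: "simulated" | "real";
  lineage: Lineage;
}

const HEX64 = /^[0-9a-f]{64}$/;

// Returns a list of failed checks for one parent→child link (empty = ok).
function verifyLink(parent: Artifact, child: Artifact): string[] {
  const errors: string[] = [];
  for (const a of [parent, child]) {
    if (!HEX64.test(a.lineage.artifactId))
      errors.push("structural: artifactId is not 64-char hex");
    if (a.lineage.artifactId !== a.contentHash)
      errors.push("identity: artifactId != contentHash");
  }
  if (child.lineage.parentArtifactId !== parent.lineage.artifactId)
    errors.push("continuity: parentArtifactId does not match parent artifactId");
  if (child.lineage.lineageId !== parent.lineage.lineageId ||
      child.lineage.rootArtifactId !== parent.lineage.rootArtifactId)
    errors.push("stability: lineageId/rootArtifactId differ");
  if (child.lineage.sequence <= parent.lineage.sequence)
    errors.push("monotonicity: child sequence not greater than parent");
  if (child.networkId !== parent.networkId)
    errors.push("cross-network contamination");
  if (child.mode !== parent.mode)
    errors.push("mode contamination");
  return errors;
}
```

Verifying a whole chain is then a fold of `verifyLink` over consecutive pairs, which is why the checks are phrased pairwise in the table.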

DAG conflict resolution

This is a light-model simulation, not GHOSTDAG. It does not model proof-of-work, hash-rate, P2P latency, k-parent selection or full consensus security.
conflict scenario
Block A (daaScore: 1)          Block B (daaScore: 1)
└─ tx-1: spends utxo-X         └─ tx-2: spends utxo-X
        │                              │
        ▼                              ▼
    ACCEPTED                       DISPLACED
    first in deterministic         conflict: utxo-X
    selected-path order            already spent by tx-1
| Priority | Rule | Effect |
| --- | --- | --- |
| 1 | Sink ancestry | Blocks in the selected path to the current sink have absolute priority. |
| 2 | Deterministic order | If ancestry does not decide, tie-break by approximate DAA score, then block ID lexicographically. |
| 3 | Transaction ID | If blocks still tie, use txid lexicographical order. |
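The three priority rules amount to a deterministic comparator. The sketch below is an illustration of that ordering, not HardKAS source: the `SimBlock` fields are hypothetical, and "lower DAA score wins the tie-break" is an assumption this sketch makes.

```typescript
interface SimBlock {
  id: string;          // block id, compared lexicographically (priority 2b)
  daaScore: number;    // approximate DAA score (priority 2a)
  onSinkPath: boolean; // ancestry of the current sink (priority 1)
}

// Orders two conflicting blocks; the block that sorts first keeps its spend.
function compareBlocks(a: SimBlock, b: SimBlock): number {
  if (a.onSinkPath !== b.onSinkPath) return a.onSinkPath ? -1 : 1;
  if (a.daaScore !== b.daaScore) return a.daaScore - b.daaScore;
  return a.id < b.id ? -1 : a.id > b.id ? 1 : 0;
}

// Priority 3: if the containing blocks still tie, fall back to txid order.
function compareTx(aTxid: string, bTxid: string): number {
  return aTxid < bTxid ? -1 : aTxid > bTxid ? 1 : 0;
}
```

Because every tie-break bottoms out in a total lexicographic order, the same conflict set always resolves the same way, which is what makes the simulation replayable.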

Query engine

| Domain | Operations | Example |
| --- | --- | --- |
| artifacts | list, inspect, diff, verify | hardkas query artifacts list --mode simulated |
| lineage | chain, transitions, orphans | hardkas query lineage chain --hash a1b2... |
| replay | list, summary, divergences | hardkas query replay divergences |
| dag | conflicts, displaced, anomalies | hardkas query dag conflicts |
| events | list, summary | hardkas query events list --kind tx.plan.created |
| Storage layer | Role |
| --- | --- |
| .hardkas/events.jsonl | Append-only event stream for workflow, RPC, replay and DAG events. |
| .hardkas/artifacts/*.json | Canonical artifact files used as source of truth. |
| SQLite query store | Indexed read model over events and artifacts. |
| Filesystem fallback | Used when SQLite is missing, stale or intentionally disabled. |
query examples
hardkas query artifacts list --json
[
  {
    "artifactId": "aaaaaaaa...aaaaaaaa",
    "schema": "hardkas.txPlan.v2",
    "networkId": "testnet",
    "mode": "simulated",
    "path": ".hardkas/artifacts/tx-plan-001.json"
  }
]

hardkas query lineage chain cccccccc...cccccccc --json
{
  "rootArtifactId": "aaaaaaaa...aaaaaaaa",
  "chain": [
    { "schema": "hardkas.txPlan.v2", "sequence": 0 },
    { "schema": "hardkas.signedTx.v2", "sequence": 1 },
    { "schema": "hardkas.txReceipt.v2", "sequence": 2 }
  ],
  "ok": true
}

hardkas query dag displaced --json
[
  {
    "txid": "tx-alpha",
    "reason": "input-spent-by-selected-path",
    "conflictingTxid": "tx-beta",
    "outpoint": "7a01...ee:0"
  }
]
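The filesystem fallback described in the storage table can be sketched as a plain JSONL read. This is an illustrative reader, not the HardKAS query store: the one-object-per-line layout follows the table above, while the event shape (a `kind` field plus arbitrary payload) is an assumption.

```typescript
import { readFileSync } from "node:fs";

interface HardkasEvent {
  kind: string; // e.g. "tx.plan.created" — shape assumed, not a spec
  [key: string]: unknown;
}

// Reads the append-only event stream (one JSON object per line),
// optionally filtered by event kind — the filesystem-fallback path.
function readEvents(path: string, kind?: string): HardkasEvent[] {
  return readFileSync(path, "utf8")
    .split("\n")
    .filter((line) => line.trim().length > 0)
    .map((line) => JSON.parse(line) as HardkasEvent)
    .filter((ev) => kind === undefined || ev.kind === kind);
}
```

An append-only stream plus an index rebuilt from it is why the SQLite store can be deleted at any time: the JSONL file and artifact files remain the source of truth.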

Architecture overview

Kaspa L1 / BlockDAG

Consensus, sequencing, finality
  • UTXO-based BlockDAG
  • Sequencing
  • Data availability
  • State commitment anchoring
  • Finality via Kaspa consensus
  • No EVM execution
  • No account-based smart contract state on L1

HardKAS

Developer tooling layer
  • CLI, SDK and core primitives
  • Deterministic simulation
  • Artifact lifecycle
  • Transaction planner
  • Replay engine
  • Query store
  • RPC diagnostics
  • Node orchestrator and localnet helpers

Igra L2

EVM execution environment
  • EVM-compatible execution
  • Account-based state
  • Contract deployment preflights
  • L2 RPC checks
  • L2 balance and nonce tooling
  • Bridge assumption awareness
  • ZK exit as target phase, not immediate guarantee

Published packages

| Package | Purpose |
| --- | --- |
| @hardkas/cli | Command-line interface for project setup, accounts, transactions, artifacts, RPC diagnostics, DAG simulation and L2 workflows. |
| @hardkas/sdk | Developer SDK for building programmatic workflows around Kaspa transactions, simulation and artifact inspection. |
| @hardkas/core | Core primitives, shared types and deterministic data structures used across the CLI and SDK. |

Repository map

| Folder / package | Purpose |
| --- | --- |
| .github/workflows | CI workflows and repeatable repository checks. |
| docs | Architecture notes, security model, artifact model, query layer, simulation model and anti-patterns. |
| examples | Runnable examples for transfers, localnet, trace/replay, RPC health, Igra read-only flows and CI demos. |
| packages/cli | CLI command surface and command groups. |
| packages/sdk | Programmatic developer SDK. |
| packages/core | Shared types, primitives and deterministic core utilities. |
| accounts | Development account and encrypted dev-keystore flows. |
| artifacts | Structured outputs for verification, replay, lineage tracking and debugging. |
| kaspa-rpc | Kaspa RPC integration and diagnostics. |
| l2 | Igra L2 profiles, transaction helpers, RPC checks and bridge assumption tooling. |
| localnet | Local development network helpers. |
| node-orchestrator | Node start/status orchestration and environment coordination. |
| query-store | Inspection tools for artifacts, lineage, replay results, DAG events and transactions. |
| simulator | Deterministic light-model simulation utilities. |
| testing | Test helpers and repeatable workflows. |
| tx-builder | Transaction construction and planning utilities. |

Artifact lifecycle

Plan      → unsigned tx blueprint
Sign      → signed payload
Broadcast → network submission
Receipt   → observed result
Trace     → debug evidence
Verify    → hash + lineage checks ↩ back to Plan
artifact workflow
hardkas tx plan --from alice --to bob --amount 10 --out .hardkas/plan.json
hardkas tx sign .hardkas/plan.json --account alice --out .hardkas/signed.json
hardkas tx send .hardkas/signed.json --yes
hardkas tx receipt --txid <TXID> --out .hardkas/receipt.json
hardkas artifact verify .hardkas/artifacts --recursive --strict

Quickstart

quickstart
pnpm install -g @hardkas/cli@alpha
hardkas init
hardkas node start
hardkas accounts list
hardkas tx send --from alice --to bob --amount 10
hardkas artifact verify .hardkas/artifacts --recursive
hardkas --help

Local development from source

monorepo
git clone https://github.com/KasLabDevs/HardKas.git
cd HardKas
pnpm install
pnpm build
hardkas init
hardkas node start
hardkas accounts list

Prerequisites

| Dependency | Required for | Install |
| --- | --- | --- |
| Node.js >= 18 | All HardKAS commands | nodejs.org |
| pnpm | Package installation | npm install -g pnpm |
| Docker | hardkas node start, hardkas node status; stop/reset/logs if available in your installed CLI build | docker.com |
| Solidity compiler (solc) | hardkas l2 contract deploy-plan when generating bytecode externally; hardkas l2 contract compile if available in your installed CLI build | pnpm add -g solc, or included via Docker |
hardkas doctor checks for missing dependencies and reports what is needed.
dependency check
hardkas doctor

Docker node workflow

node lifecycle
hardkas node start  →  container running (kaspad simnet)
       │
       ├── hardkas node status  →  shows container health + RPC readiness
       ├── hardkas node logs    →  streams kaspad output
       ├── hardkas rpc health   →  checks RPC endpoints
       │
hardkas node stop   →  container stopped (data preserved)
hardkas node reset  →  container removed + chain data deleted
| Command | What it does | Destructive? |
| --- | --- | --- |
| hardkas node start | Pulls kaspad image, starts container on simnet, maps RPC ports | No |
| hardkas node status | Shows container state and connectivity | No |
| hardkas node logs | Streams kaspad stdout from container; use only if present in your installed CLI build | No |
| hardkas node stop | Stops container, preserves chain data in .hardkas/kaspad/; use only if present in your installed CLI build | No |
| hardkas node reset | Removes container AND deletes chain data; requires --yes or confirmation if present in your installed CLI build | YES |
node reset deletes all local chain data. This is intentional for development — it gives you a clean simnet. But it cannot be undone.

CLI reference

Project commands
| Command | Description |
| --- | --- |
| hardkas init | Initializes a HardKAS project, local configuration and workspace. |
| hardkas up | Starts the expected local development stack when available. |
| hardkas doctor | Runs environment checks and reports common issues. |
| hardkas config show | Prints active configuration for debugging. |
Accounts
| Command | Description |
| --- | --- |
| hardkas accounts list | Lists known development accounts. |
| hardkas accounts real init | Initializes local encrypted real-account support. |
| hardkas accounts real generate | Generates a new development account. |
| hardkas accounts real import | Imports an account into the local encrypted dev keystore. |
| hardkas accounts real unlock | Unlocks an account for signing operations. |
| hardkas accounts real lock | Locks the account again. |
| hardkas accounts real change-password | Rotates the local keystore password. |
| hardkas faucet | Local/simulated funding helper; must not be treated as mainnet funding. |
Key handling. Never import a primary high-value mainnet seed phrase into developer tooling.
Node and transaction lifecycle
| Command | Description |
| --- | --- |
| hardkas node start | Starts or orchestrates a local Kaspa node environment. |
| hardkas node status | Shows node status and connectivity information. |
| hardkas tx profile | Shows the current transaction profile/environment. |
| hardkas tx plan | Builds a transaction plan without signing. |
| hardkas tx sign | Signs a transaction plan. |
| hardkas tx send | Broadcasts a signed or constructed transaction. |
| hardkas tx receipt | Fetches or stores a transaction receipt. |
| hardkas tx verify | Verifies transaction structure, fees, semantic assumptions and artifact integrity. |
| hardkas tx trace | Produces trace information useful for debugging and replay. |
Artifacts, replay and snapshots
| Command | Description |
| --- | --- |
| hardkas artifact verify | Verifies artifact integrity. |
| hardkas artifact explain | Explains an artifact in human-readable form. |
| hardkas artifact lineage | Shows provenance and relationships between artifacts. |
| hardkas replay | Replays stored evidence to reproduce behavior. |
| hardkas snapshot | Captures or restores deterministic local state snapshots. |
| hardkas test | Runs HardKAS-oriented test workflows. |
RPC diagnostics
| Command | Description |
| --- | --- |
| hardkas rpc info | Displays RPC endpoint information. |
| hardkas rpc health | Checks node health. |
| hardkas rpc doctor | Runs deeper RPC diagnostics. |
| hardkas rpc dag | Inspects DAG-related data exposed by RPC. |
| hardkas rpc utxos | Queries UTXOs. |
| hardkas rpc mempool | Inspects mempool state. |
DAG simulation and query layer
| Command | Description |
| --- | --- |
| hardkas dag status | Shows simulated DAG state. |
| hardkas dag simulate-reorg | Runs deterministic DAG reorg/conflict simulation. |
| hardkas query artifacts list | Lists stored artifacts. |
| hardkas query artifacts inspect | Inspects a stored artifact. |
| hardkas query artifacts diff | Compares artifacts. |
| hardkas query lineage chain | Shows artifact lineage chains. |
| hardkas query lineage transitions | Shows lineage transitions. |
| hardkas query lineage orphans | Reports orphaned lineage records. |
| hardkas query replay list | Lists replay runs. |
| hardkas query replay summary | Summarizes replay outputs. |
| hardkas query replay divergences | Finds replay divergences. |
| hardkas query replay invariants | Checks replay invariants. |
| hardkas query dag conflicts | Inspects DAG conflict records. |
| hardkas query dag displaced | Shows displaced transactions or records. |
| hardkas query dag history | Shows DAG history in the local query store. |
| hardkas query dag sink-path | Inspects sink-path data. |
| hardkas query dag anomalies | Reports DAG-model anomalies. |
| hardkas query events | Queries stored events. |
| hardkas query tx | Queries transaction records. |
Igra L2
| Command | Description |
| --- | --- |
| hardkas l2 networks | Lists configured L2 networks. |
| hardkas l2 profile show | Displays the active L2 profile. |
| hardkas l2 profile validate | Validates L2 configuration. |
| hardkas l2 tx build | Builds an L2 transaction. |
| hardkas l2 tx sign | Signs an L2 transaction. |
| hardkas l2 tx send | Sends an L2 transaction. |
| hardkas l2 tx receipt | Fetches an L2 receipt. |
| hardkas l2 tx status | Checks L2 transaction status. |
| hardkas l2 contract deploy-plan | Creates a contract deployment plan before broadcast. |
| hardkas l2 bridge status | Shows bridge-related state. |
| hardkas l2 bridge assumptions | Explains current bridge trust assumptions. |
| hardkas l2 rpc health | Checks L2 RPC health. |
| hardkas l2 balance | Checks L2 account balance. |
| hardkas l2 nonce | Checks L2 account nonce. |

Security model

Do not use HardKAS as your primary wallet. Do not store high-value production keys in a developer keystore. For production custody, use dedicated hardware-backed wallets and operational security processes.
| Area | Explanation |
| --- | --- |
| Developer keystore | Useful for repeatable signing workflows, not for high-value asset protection. |
| Mainnet guardrails | Mainnet signing should require explicit intent and visible separation from local or test profiles. |
| Threat model | Accidental mainnet signing, seed misuse, wrong environment, RPC spoofing, artifact tampering, replay confusion and bridge assumption misunderstanding. |
| Mitigations | Explicit network profiles, artifact verification, content hashes, lineage tracking, strict flags, human-readable artifact explainers, doctor commands and security documentation. |

| Command | Local / Simnet | Testnet | Mainnet |
| --- | --- | --- | --- |
| hardkas faucet | Allowed | Refused / environment-specific | Refused |
| hardkas accounts fund | Allowed | Refused / environment-specific | Refused |
| hardkas tx sign | Allowed | Allowed | Explicit mainnet confirmation required |
| hardkas tx send | Allowed | Allowed | Explicit confirmation required |

Networks and environments

| Environment | Use | Notes |
| --- | --- | --- |
| Local / simulated | Fast deterministic testing and examples. | Not consensus-equivalent. |
| Testnet | Real network behavior without high-value funds. | Useful for RPC and integration checks. |
| Mainnet | Real assets. | Requires strict care. Avoid dev keystore for high-value keys. |
| Igra L2 | Separate account-based EVM-compatible execution environment. | Bridge assumptions must be explicit. |

Kaspa L1 vs Igra L2

| Layer | Responsibility | Execution model | HardKAS role |
| --- | --- | --- | --- |
| Kaspa L1 | UTXO BlockDAG, sequencing, data availability, anchoring | No EVM execution | Plans, verifies and interacts with L1 transactions and RPC |
| Igra L2 | EVM-compatible execution, account state, contracts | Account-based L2 execution | Preflights, RPC checks, L2 tx tooling and profile validation |
| Bridge | Cross-layer movement | Phase-dependent trust assumptions | Exposes assumptions rather than hiding them |

Bridge realities

| Phase | Trust model | Notes |
| --- | --- | --- |
| Pre-ZK | Not trustless. May rely on multisig, MPC or committee assumptions. | Useful for early workflows but must be labeled. |
| MPC phase | Stronger operational model than simple multisig. | Still not equivalent to ZK trustless exits. |
| ZK phase | Target model for trust-minimized exits. | Validity depends on proof verification and data availability. |
bridge checks
hardkas l2 bridge assumptions
hardkas l2 bridge status

RPC diagnostics

| Diagnostic | Purpose |
| --- | --- |
| Endpoint health | Check whether the selected RPC endpoint is reachable and responding. |
| DAG visibility | Inspect DAG-related data exposed by RPC. |
| UTXO lookup | Query UTXOs for a given address. |
| Mempool inspection | Inspect pending transaction state. |
| Doctor checks | Run deeper environment diagnostics. |
rpc diagnostics
hardkas rpc info
hardkas rpc health
hardkas rpc doctor
hardkas rpc dag
hardkas rpc utxos --address <ADDRESS>
hardkas rpc mempool

DAG simulation and reorg testing

Light-model only. DAG simulation does not claim full GHOSTDAG/DAGKnight parity or replace official consensus logic.
dag tools
hardkas dag status
hardkas dag simulate-reorg
hardkas query dag conflicts
hardkas query dag displaced
hardkas query dag history
hardkas query dag sink-path
hardkas query dag anomalies

Examples

| Example | Purpose |
| --- | --- |
| Hello Kaspa | Minimal project initialization and first CLI commands. |
| Basic Transfer | Plan, sign, send and verify a transaction. |
| Localnet Demo | Start a local environment and execute deterministic flows. |
| Trace & Replay | Produce traces and replay transaction behavior. |
| Snapshot Restore | Capture and restore deterministic local state. |
| RPC Node Health | Diagnose endpoint and node health. |
| Failure Debugging | Use artifacts and traces to explain unexpected behavior. |
| Igra L2 Readonly | Query L2 state without sending transactions. |
| Bridge Assumptions | Inspect current bridge trust assumptions. |
| CI Workflow | Run repeatable development checks in CI. |
| DAG Reorg Simulation | Test behavior under DAG displacement and conflict scenarios. |

Comparison

| Feature | HardKAS | Generic script | Wallet-only workflow |
| --- | --- | --- | --- |
| Deterministic planning | Yes | Manual | No |
| Artifact verification | Yes | Usually no | No |
| Replay/debugging | Yes | Ad hoc | No |
| CLI workflow | Yes | Custom | No |
| RPC diagnostics | Yes | Manual | No |
| DAG simulation | Light-model | No | No |
| L2 preflights | Preview | Manual | No |
| Security guardrails | Explicit | Developer-defined | Wallet-defined |
| CI friendliness | Designed for it | Depends | No |
| Human-readable lineage | Yes | Usually no | No |

If you come from Hardhat or Foundry

| Hardhat concept | HardKAS analogue |
| --- | --- |
| Network config | HardKAS profiles |
| Deployment scripts | Transaction/deploy plans |
| Transaction receipt | HardKAS receipt artifact |
| Trace/debug | HardKAS trace/replay |
| Local network | HardKAS localnet/light simulation |
| Tasks | CLI command groups |
| Artifacts | HardKAS artifacts and lineage |

Status and roadmap

| Status | Items |
| --- | --- |
| Stable | Deterministic artifacts, plans, signed transactions, receipts, snapshot hashing, semantic verification and encrypted dev keystore. |
| Preview | Igra L2 integration, contract deployment preflights, bridge modeling and advanced lineage extensions. |
| Research | DAG light-model, anomalies engine and deeper DAG introspection. |
| Planned | Multi-node localnet, visual trace explorer, Post-Toccata SilverScript and covenant tooling, vProg tooling and fuzzing infrastructure. |

FAQ

Is HardKAS a wallet?

No. It is developer infrastructure. It may include local signing tools, but it is not production custody software.

Does HardKAS implement Kaspa consensus?

No. It may simulate DAG-related behavior for development, but it does not replace official Kaspa consensus software.

Does Kaspa L1 run EVM?

No. EVM execution belongs to separate L2 environments such as Igra.

Can I use HardKAS on mainnet?

Only with extreme care and explicit guardrails. It is not intended for high-value custody.

Why artifacts?

Artifacts make transaction workflows inspectable, reproducible, verifiable and replayable.

What does deterministic mean here?

Given the same inputs and local state, the workflow should produce reproducible outputs such as plans, receipts, traces and replay results.

Is the bridge trustless?

Only a ZK-based exit model can be described as trustless. Pre-ZK bridge workflows must disclose their multisig/MPC assumptions.

Is Igra part of Kaspa L1?

No. Igra is a separate L2 execution environment that uses Kaspa as its base, sequencing and anchoring layer, depending on the architecture phase.

HardKAS · Docs · CLI · SDK · Examples · Security · GitHub · MIT License
HardKAS is developer infrastructure in active alpha. It is provided without warranty and is not production custody software.