COMPLIANCE & TRUST

The governance layer
auditors can stand behind.

nxtlinq is SOC 2 Type 2 certified and SOC 3 publicly attested. Every AI agent action generates cryptographic, blockchain-anchored evidence, making compliance not a checkbox but an architecture.

AICPA SOC
AICPA SOC 2 Type 2

Certified

SOC 2

Type 2 — 4 Trust Service Criteria

SOC 3

Publicly Available Report

9

Issued & Pending Patents

100%

Immutable Blockchain Audit Trail

Air-Gap

Sovereign Deployment Ready

15-Day

POC to Full Governance

SOC REPORTS

Independent attestation.
No ambiguity.

nxtlinq's System and Organization Controls (SOC) reports are independent third-party examinations that demonstrate how nxtlinq meets its key compliance controls and objectives. Two reports are available: one for customers requiring detailed control evidence, and one publicly available to all.

SOC 2 vs SOC 3 — At a Glance

Full comparison of both reports, standards, scope, and audience.

Trust Service Criteria — What's Covered

Both SOC reports attest to four AICPA Trust Service Criteria: Security, Availability, Confidentiality, and Privacy.

🔒
Security

System is protected against unauthorized access — physical and logical. Zero-trust identity controls, MFA enforcement, and blockchain-anchored access logs.

🔏
Confidentiality

Information designated as confidential is protected. Encryption at rest and in transit, role-scoped AIT access, and no operational data on-chain.

Availability

System is available for operation and use as committed. SLA commitments, redundancy architecture, and monitored uptime.

🛡️
Privacy

Personal information is collected, used, retained, and disclosed in conformity with commitments. HIT-anchored identity isolation prevents data commingling.

AICPA SOC
AICPA SOC 2 Type 2 Certified

Security · Availability · Confidentiality · Privacy
Issued under SSAE No. 18

Publicly available — no NDA required

Available to customers & prospects with a business need

Also Aligned With

AUDIT TRAIL ARCHITECTURE

Evidence that survives
any audit.

Every AI agent action in nxtlinq generates a cryptographic record — scope, authorization, execution, and outcome — anchored to a blockchain ledger. This isn't logging. It's proof.

🪙
HIT / AIT Cryptographic Identity

Every human principal holds a Human Identity Token (HIT). Every AI agent is issued an AI Identity Token (AIT) scoped, time-bound, and cryptographically chained to its HIT. No action can occur without a verifiable identity at both ends of the delegation chain.

PATENTED ARCHITECTURE
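A minimal sketch of how such a delegation chain might be represented. All names, fields, and the HMAC signing stand-in are illustrative assumptions, not nxtlinq's actual API; a production system would use asymmetric keys and hardware-backed signing.

```python
import hashlib
import hmac
import time

SECRET = b"demo-signing-key"  # illustrative only; real systems use asymmetric keys

def sign(payload: str) -> str:
    """HMAC stand-in for a cryptographic signature."""
    return hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()

def issue_hit(principal: str) -> dict:
    """Human Identity Token: the root of the delegation chain."""
    return {"type": "HIT", "principal": principal, "sig": sign(f"HIT|{principal}")}

def issue_ait(hit: dict, scope: list, ttl_s: int) -> dict:
    """AI Identity Token: scoped, time-bound, chained to its parent HIT."""
    expires = int(time.time()) + ttl_s
    payload = f"AIT|{hit['sig']}|{','.join(sorted(scope))}|{expires}"
    return {
        "type": "AIT",
        "parent_sig": hit["sig"],   # cryptographic link back to the human principal
        "scope": set(scope),
        "expires": expires,
        "sig": sign(payload),
    }

hit = issue_hit("alice@example.com")
ait = issue_ait(hit, scope=["crm.read", "crm.write"], ttl_s=3600)
assert ait["parent_sig"] == hit["sig"]  # every agent traces to a named human
```

The key property is the `parent_sig` link: an AIT cannot exist without a signed HIT, so every action taken under the AIT resolves to a named human principal.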
📐
Scope Enforcement Before Execution

The ASTP (Authenticate · Scope · Trust · Prove) framework enforces governance before any agent acts — not after. Each action is evaluated against the AIT's policy envelope before execution. Out-of-scope requests are blocked and logged.

ASTP FRAMEWORK
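The authorize-before-execute pattern described above can be sketched as follows. Function names and the policy shape are hypothetical, not the ASTP framework's real interface:

```python
import time

def evaluate(ait: dict, action: str, log: list) -> bool:
    """Check an action against the AIT's policy envelope before executing it."""
    now = time.time()
    allowed = action in ait["scope"] and now < ait["expires"]
    # Every evaluation is logged, including blocked out-of-scope attempts.
    log.append({"action": action, "allowed": allowed, "ts": now})
    return allowed

def governed_execute(ait: dict, action: str, log: list, do_it):
    """Gate execution on the policy check; out-of-scope requests never run."""
    if not evaluate(ait, action, log):
        raise PermissionError(f"blocked out-of-scope action: {action}")
    return do_it()

log = []
ait = {"scope": {"crm.read"}, "expires": time.time() + 3600}
governed_execute(ait, "crm.read", log, lambda: "ok")        # in scope: runs
try:
    governed_execute(ait, "crm.delete", log, lambda: "no")  # out of scope: blocked
except PermissionError:
    pass
assert [e["allowed"] for e in log] == [True, False]
```

Note that the blocked attempt still produces a log entry: denial is itself governance evidence.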
⛓️
Blockchain-Anchored Audit Ledger

Every governance event — authorization, scope enforcement, policy evaluation, execution outcome — is hashed and committed to an immutable blockchain ledger. No operational data is stored on-chain. Only cryptographic commitments. Tamper-evident by design.

IMMUTABLE · TAMPER-EVIDENT
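The commit-only-hashes design can be illustrated with a toy hash chain, assuming SHA-256 commitments over canonicalized events (the structure is a generic hash-chain sketch, not nxtlinq's ledger format):

```python
import hashlib
import json

def commit(ledger: list, event: dict) -> None:
    """Append only a hash commitment of the event; no operational data on-chain."""
    prev = ledger[-1]["hash"] if ledger else "0" * 64
    digest = hashlib.sha256(json.dumps(event, sort_keys=True).encode()).hexdigest()
    entry_hash = hashlib.sha256((prev + digest).encode()).hexdigest()
    ledger.append({"prev": prev, "event_digest": digest, "hash": entry_hash})

def verify(ledger: list) -> bool:
    """Third parties can verify chain integrity from the commitments alone."""
    prev = "0" * 64
    for entry in ledger:
        expected = hashlib.sha256((prev + entry["event_digest"]).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

ledger = []
commit(ledger, {"action": "crm.read", "result": "ok"})
commit(ledger, {"action": "report.generate", "result": "ok"})
assert verify(ledger)
ledger[0]["event_digest"] = "tampered"  # any alteration breaks the chain
assert not verify(ledger)
```

Because each entry's hash covers its predecessor, altering any record invalidates every entry after it, which is the tamper-evidence property the card describes.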
🧵
Full Delegation Chain Lineage

Multi-agent workflows maintain unbroken delegation chains. Sub-agent spawning, tool invocations, and cross-model handoffs each inherit and extend the parent AIT context — so every action traces back to a named, verified human principal.

END-TO-END ATTRIBUTION
🔁
Deterministic Action Replay

Any authorized action can be deterministically reconstructed for incident investigation or regulatory review. Full policy state, delegation context, and execution pathway are preserved — enabling precise forensic replay without relying on fragile logs.

FORENSIC REPLAY
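Deterministic replay means the decision can be re-derived purely from preserved state, with no live log dependency. A minimal sketch, with a hypothetical record shape:

```python
def replay(record: dict) -> dict:
    """Reconstruct the policy decision purely from the preserved record."""
    allowed = (record["action"] in record["policy_scope"]
               and record["ts"] < record["expires"])
    return {"action": record["action"], "decision": allowed}

# A stored governance record: everything needed to re-derive the decision.
record = {"action": "crm.read", "policy_scope": {"crm.read"},
          "ts": 100.0, "expires": 4000.0}
assert replay(record) == replay(record)  # same inputs, same answer, every time
assert replay(record)["decision"] is True
```

Because the policy state and context are captured at execution time rather than inferred later, replaying the record always yields the original decision.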
📋
Policy-Bound Trusted Data Packets

nxtGPT and nxtNLP generate Trusted Data Packets (TDPs) — structured, versioned data objects carrying provenance, authorization context, and cryptographic lineage. TDPs are the auditable unit of AI-generated output throughout the enterprise.

TDP ARCHITECTURE
THE EVIDENCE STORY

What auditors ask for.
What nxtlinq delivers.

Auditors in regulated sectors, whether working against SOC, HIPAA, SOX, or FedRAMP, ask the same fundamental questions about AI deployments. Here's exactly how nxtlinq answers each one.

01
"Who authorized this AI action?"

Every agent action is traceable to a named human principal via an unbroken HIT → AIT delegation chain. Authorization scope, time-bounds, and permissions are cryptographically recorded at the moment of issuance.

  Human Identity Token (HIT) anchors all delegation
  AIT carries full scope definition and expiry
  Blockchain commit records the authorization event
02
"What did the AI actually do?"

Every action executed by an AI agent — reads, writes, API calls, tool invocations — is logged to the blockchain audit ledger at execution time. The record includes the agent's AIT, the action payload hash, and the policy evaluation result.

  Execution event hashed and committed on-chain
  Policy state preserved at time of action
  Out-of-scope attempts blocked and logged separately
03
"Can you prove nothing was tampered with?"

Blockchain anchoring provides cryptographic tamper evidence. Any modification to an audit record would break the hash chain — making unauthorized alterations immediately detectable without requiring trust in nxtlinq's own infrastructure.

  Hash-chain integrity verifiable by third parties
  No operational data stored on-chain — only proofs
  Independently verifiable without vendor access
04
"Can you reconstruct what happened?"

Deterministic action replay allows any authorized event to be precisely reconstructed — including the full policy state, delegation context, and decision pathway at the time of execution. No dependency on fragile log infrastructure.

  Full forensic reconstruction from stored proofs
  Policy and context captured at execution — not inferred
  Supports incident investigation and regulatory review

📜

The result: AI compliance that satisfies auditors — not just security teams.

Okta, Entra ID, and Ping were designed for human credentials. They authenticate who logs in — but have zero visibility into what an AI agent does after authentication. The delegation chain from human to agent is completely invisible to them.

SOVEREIGN DEPLOYMENT

Full air gap.
Zero compromise.

For defense, intelligence, and the most regulated sectors — where data sovereignty is non-negotiable — nxtlinq supports complete air-gapped deployment. No cloud dependency. No external API calls. No data leaving your perimeter.

🏗️

Complete On-Premises Architecture

All three platform products — nxtID, nxtNLP, nxtGPT — deploy entirely within your infrastructure. The HIT/AIT governance fabric, blockchain audit ledger, and policy enforcement engine run inside your perimeter.

🤖

Private & Air-Gapped Model Support

nxtlinq is fully model-agnostic. Governance wraps any model — self-hosted Llama, Mistral, private fine-tuned models, or classified inference environments. The identity delegation chain and audit trail are preserved regardless of where inference runs.

⛓️

Internal Blockchain Ledger

The immutable audit ledger operates on a private, internal blockchain — no public chain dependency, no external validators. Cryptographic tamper evidence and chain integrity are maintained entirely within your infrastructure boundary.

📋

Compliance Evidence Without Cloud Exposure

SOC 2 audit evidence, HIPAA access records, and SOX-ready audit trails are all generated and retained internally. Regulators receive the evidence package — no cloud provider is in the chain of custody.

YOUR INFRASTRUCTURE PERIMETER
AIR-GAPPED ZONE
🏗️
HIT / AIT Identity Fabric
NXTID
⛓️
Private Blockchain Audit Ledger
INTERNAL
🧠
nxtNLP · nxtGPT (on-prem)
NXTLINQ
🤖
Private / Air-Gapped Models
YOUR MODELS
🏛️
Enterprise IAM (Okta / Entra / Ping)
LAYER 1

⛔ No external network calls · No cloud dependency
All data stays inside your perimeter

PATENT PORTFOLIO

Defensible IP.
Not just documentation.

nxtlinq's compliance architecture is backed by 6 issued patents and 3 pending — covering the HIT/AIT identity framework, blockchain-based AI agent lifecycle management, and dynamic data security. This is defensible, auditable architecture by design.

6

Issued Patents

3

Pending Patents

9

Total Portfolio

US 11,507,754
Visualization Tool for NLP-based Clustering and Analysis of Unstructured Comments & Data
ISSUED

Analytical foundation of nxtNLP — enabling tokenized execution event intelligence and structured analysis of unstructured data at enterprise scale.

US 11,927,436
Systems and Methods for Machine Learning (ML)-Based Advanced User Interactions
ISSUED

Enables personalized generative AI experiences based on user context and behavior — the behavioral layer of the HIT identity model.

US 9,626,359 B1
Dynamic Data Encapsulation Systems
ISSUED

Browser-level and runtime data encapsulation for secure policy enforcement — foundational to nxtlinq's zero-trust data boundary architecture.

US 12,418,417
AI Agent Lifecycle Management and Authentication Systems using Decentralized Cryptographic Anchoring
ISSUED

Governs identity and trust scoring for autonomous AI agents — the core AIT issuance and lifecycle management patent.

US 12,483,411
Blockchain-Based Artificial Intelligence Agent Life Cycle Management
ISSUED

Expands the AIT framework with real-time authentication and behavioral auditing — the blockchain ledger evidence architecture.

US 12,574,251
Blockchain-Based Platform-Independent Personal Identity Profiles
ISSUED

Introduces the Human Identity Token (HIT) for secure cross-platform identity — the human-side anchor of the entire governance chain.

US 18/777,042
Context-Aware Generative Artificial Intelligence System
PENDING

Reusable global/local context memory and enterprise context lifecycle control — the foundation of nxtGPT's governed, context-persistent architecture.

US 19/314,928
Blockchain-Based AI Agent Life Cycle Management (Continuation)
PENDING

Expands the AIT framework with an identity-anchored, audited learning loop — enabling AI agents to improve over time without losing governance traceability.

US 19/370,524
Blockchain-Based AI Agent Life Cycle Management (Continuation)
PENDING

Adds tokenized feedback (TDPs) and human-in-the-loop reinforcement mechanisms — the compliance-auditable feedback layer of the AI lifecycle.

REGULATORY FRAMEWORKS

Built for the sectors where
compliance isn't optional.

🏥
HIPAA

Healthcare data sovereignty and access control for AI agents operating on PHI.

HOW NXTLINQ HELPS
  • AIT scope limits PHI access to authorized agents only

  • Every PHI interaction logged to immutable ledger

  • HIT ensures human accountability for all AI-driven access

  • Air-gap deployment for no-cloud PHI environments

📊
SOX

Financial reporting controls and audit trails for AI agents in finance workflows.

HOW NXTLINQ HELPS
  • Deterministic replay of all AI financial actions

  • Blockchain-anchored evidence of authorization chain

  • Policy enforcement prevents unauthorized financial scope

  • Tamper-evident records satisfy external auditor requirements

🏛️
FedRAMP

Federal cloud security for AI deployments in government and defense environments.

HOW NXTLINQ HELPS
  • Full air-gap deployment for classified environments

  • Zero-trust identity controls aligned to NIST 800-53

  • Private blockchain ledger — no public chain exposure

  • On-premises model support for sovereign inference

🌍
GDPR / Data Sovereignty

Data residency, right-to-audit, and AI decision accountability for European deployments.

HOW NXTLINQ HELPS
  • HIT isolation ensures per-subject data boundary enforcement

  • AI decision lineage supports right-to-explanation requirements

  • Regional deployment with no cross-border data flow

  • Audit logs exportable for data subject access requests

READY TO GET STARTED?

Your auditors will want
to see this.

Get the SOC 3 public report now, or request a 15-day POC to see nxtlinq's evidence architecture in your own environment.
