Aiqrion v0.6 is in alpha — model output is not production-grade and may be incorrect. Aiqrion does not claim to beat Claude / ChatGPT / Gemini / Grok yet.

Aiqrion

HX AI's proprietary intelligence engine. Aiqrion is forked, audited, modified, fine-tuned, distilled, evaluated, and productized — not a router over third-party models. Currently in alpha.

Sign up for the alpha · See pricing · Try Aiqrion alpha · Aiqrion Dispatch

Plans

Free, Pro, Max, and Enterprise tiers. Quotas, model access, Codex runs, RAG storage, and Dispatch tasks scale per plan. The Internal tier is reserved for the HX team. v0.8 ships hosted alpha billing in mock mode by default; production billing is wired through the Stripe adapter and only activates once secrets are configured. See pricing.

Aiqrion Dispatch

HX AI's autonomous task layer. Submit a goal, Aiqrion plans the work, runs through a sandboxed tool layer, and stops at approval gates for risky actions. Destructive commands and network egress are denied by default. Logs are PII-redacted and tenant-isolated. See Dispatch.
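The gate-and-deny behavior described above can be sketched as a simple policy check. This is an illustrative sketch only: the pattern lists, categories, and function name are assumptions, not Aiqrion Dispatch's actual policy engine or API.

```python
import re

# Illustrative policy tiers; real Dispatch policies would be tenant-configurable.
DENY_PATTERNS = [r"\brm\s+-rf\b", r"\bcurl\b", r"\bwget\b"]   # destructive / network egress: denied by default
APPROVAL_PATTERNS = [r"\bgit\s+push\b", r"\bdeploy\b"]         # risky: stop at an approval gate

def gate(command: str) -> str:
    """Return 'deny', 'needs_approval', or 'allow' for a proposed tool command."""
    if any(re.search(p, command) for p in DENY_PATTERNS):
        return "deny"
    if any(re.search(p, command) for p in APPROVAL_PATTERNS):
        return "needs_approval"
    return "allow"
```

In this sketch, denied actions never reach the sandbox at all, while gated actions pause the task until a human approves.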

Product family

Aiqrion Core

HX-AI-owned reasoning model lineage. Fork-and-fuse on top of Qwen 27B with HX-owned training, distillation, and checkpoint manifests.

Aiqrion Codex

HX-owned coding agent — repo indexing, plans, diffs, dry-run patches, review, audit, rollback metadata. Not a wrapper around Aider, OpenHands, Cursor, or Copilot.

Aiqrion Vision

Image, screenshot, PDF, chart, table, and OCR document understanding. License-aware adapter chain (Nemotron OCR / PaddleOCR / Tesseract).

Aiqrion Forge

License-aware image generation. SDXL by default, FLUX-pro via premium API, and FLUX-dev restricted to research use and blocked from production tenants.

Aiqrion Edge

Local / offline / on-prem packaging built from Ollama and llama.cpp lessons. Deny-external by default, tenant-policy-aware.

Aiqrion Enterprise

Private deployment — your tenants, your audit trail, your model card lineage, your safety policy.

How Aiqrion is built

Aiqrion follows a fork-and-fuse strategy: we identify open, permissively licensed model and tooling sources, audit each license, snapshot the parts we can legally use, study the rest, and build HX-owned interfaces over the reusable components. Aiqrion-owned checkpoints come from SFT / LoRA / QLoRA / DPO / distillation pipelines on top of those bases. Aiqrion is honest about being early — we do not claim a from-scratch HX-trained frontier model today. v0.5 produced our first adapter artifact; v0.6 ships the product alpha.
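The license-audit step above can be sketched as a small classifier over SPDX-style license identifiers. The allowlist and study-only sets below are assumptions for illustration, not Aiqrion's actual legal policy.

```python
# Hypothetical license gate for the fork-and-fuse audit step.
# The sets below are illustrative, not Aiqrion's real allowlist.
PERMISSIVE = {"Apache-2.0", "MIT", "BSD-3-Clause"}   # safe to snapshot and reuse
STUDY_ONLY = {"CC-BY-NC-4.0", "research-only"}       # read, learn from, never ship

def audit(component: str, license_id: str) -> str:
    """Classify an upstream component: snapshot it, study it, or skip it."""
    if license_id in PERMISSIVE:
        return f"snapshot:{component}"
    if license_id in STUDY_ONLY:
        return f"study-only:{component}"
    return f"skip:{component}"
```

Components that pass the gate get snapshotted with their license text; everything else is either studied without reuse or skipped.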

Positioning

Aiqrion targets the same capability surface as Claude, ChatGPT, Gemini, Grok, Cursor, GitHub Copilot, and Devin-style agents. We are honest about being in alpha — we do not claim Aiqrion already beats those products on benchmarks. The v0.6 alpha lets internal teams test the full pipeline end to end so we can earn those numbers in v0.7+.

Security & privacy

Developer API

OpenAI-compatible /v1/chat/completions plus first-class Aiqrion-specific endpoints for Codex, RAG, AgentOS, datasets, adapters, evals, and audit. See docs/v0.6/web-console.md.
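Because the endpoint is OpenAI-compatible, any client that speaks the Chat Completions wire format should work against it. A minimal request-construction sketch follows; the model name and base URL are placeholders, not confirmed Aiqrion identifiers.

```python
import json

def build_chat_request(model: str, user_message: str) -> dict:
    """Build an OpenAI-style /v1/chat/completions request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }

# Placeholder model name; substitute your tenant's actual model id.
body = build_chat_request("aiqrion-core", "Summarize this repo.")

# To send: POST {BASE_URL}/v1/chat/completions with this JSON body
# and an "Authorization: Bearer <key>" header.
print(json.dumps(body))
```

The same body shape works with off-the-shelf OpenAI SDKs by pointing their base URL at the Aiqrion host.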

Use cases

Model stack & checkpoint lineage

Aiqrion v0.6 lineage: aiqrion-core on Qwen 27B, aiqrion-codex on Qwen 2.5-Coder-32B, aiqrion-vision on Qwen 2-VL-7B, aiqrion-edge quantized via Ollama/llama.cpp, and aiqrion-premium through the Mistral Large 3 API. Every Aiqrion-owned derivative carries a manifest, model card, inherited upstream license, and a modification log.
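A derivative manifest of the kind described might look like the following. Every field name, path, and license value here is an assumption for illustration; the real schema is not published.

```python
# Illustrative manifest for an Aiqrion-owned derivative checkpoint.
# Field names, paths, and license values are assumptions, not the real schema.
manifest = {
    "name": "aiqrion-core",
    "base_model": "Qwen 27B",
    "inherited_license": "Apache-2.0",  # assumed upstream license for the sketch
    "modification_log": [
        "SFT on curated instruction data",
        "LoRA adapter merge",
    ],
    "model_card": "docs/model-cards/aiqrion-core.md",  # hypothetical path
}
```

Recording the base model, inherited license, and every modification in one artifact is what lets a derivative's provenance be audited after the fact.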

Early access

Aiqrion v0.6 is an internal alpha. Early access is invite-only.

Open the alpha console · Inspect dataset governance