Singapore · 2026 · SEA-first

ORENTHAL

Frontier Intelligence Platform

"Not the loudest god.
The one who holds the sky."

Three frontier models. One OpenAI-compatible API. Built from the ground up in Singapore by Matrix.Corp — a lab that believes the next leap in AI is not bigger, but more complete.

OpenAI-compatible · No seat licences · Pay per token

3 Frontier Models · 1M Max Context · $0.80 Starting Price · Singapore, 2026
The Orenthal API

Three Minds.
One Interface.

All models served via a single OpenAI-compatible endpoint. Switch models by changing one string.
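Because every model sits behind the same endpoint, switching really is a one-string change. A minimal sketch (model IDs taken from the tier listings below; the payload shape is the standard OpenAI chat format):

```python
def chat_payload(model: str, prompt: str) -> dict:
    """Build a standard OpenAI-style chat payload for any Orenthal model."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

zeus = chat_payload("orenthal/zeus-70b", "Hello, Orenthal.")
lattice = chat_payload("matrix-corp/lattice-120b", "Hello, Orenthal.")

# The two requests are identical except for the model string.
assert {k: v for k, v in zeus.items() if k != "model"} == \
       {k: v for k, v in lattice.items() if k != "model"}
```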

Orenthal Zeus
Early Access
General Frontier · SACM Architecture · Debut Model
"Intelligence, complete."
Orenthal's first original architecture. Not a finetune. Not a distillation. Sovereign Attention with Crystalline MoE — built from the ground up to compete with GPT-5, Claude, and Gemini. The synthesis of everything Matrix.Corp has learned.
I
Sovereign Reasoning
Intent Crystallisation Pre-Pass reasons before every response. Always.
II
Calibrated Honesty
Epistemic Calibration Head flags exactly what Zeus is uncertain about.
III
Human Intelligence
EQ Engine V3 — 8 layers, bidirectional — notices the human in every message.
SACM Architecture · Intent Crystalliser 3B · EQ Engine V3 · Epistemic Calibration · 256K–1M Context · Closed Source
Model Tiers
zeus-70b 70B · ~18B active · 256K $1.00
zeus-200b 200B · ~32B active · 512K $1.00
zeus-1t ~1T · ~80B active · 1M $1.00
Zeus Extensions
GET /v1/zeus/epistemic
POST /v1/zeus/clarify
POST /v1/zeus/restate
🌐
Matrix Lattice
Released
Frontier Agentic MoE · 17 Modules · 1M Context
"Frontier Agentic MoE."
Frontier-scale mixture-of-experts. 17 custom intelligence modules working in coordination. 1M token context. EQ Engine V2, MACL, HCCE, Causal Reasoning Graph, Long-Horizon Task Planner.
lattice-120b 120B · 64 top-4 · 1M $0.80
lattice-430b 430B · 128 top-4 · 1M $0.80
lattice-671b 671B · 256 top-4 · 1M $0.80
EQ Engine V2 · MACL · HCCE · 1M Context · Closed Source
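The tier rows read as total experts with top-4 routing: lattice-120b, for example, routes each token to 4 of 64 experts. A generic sketch of top-k gating, illustrative only (Lattice's actual router is closed source):

```python
import math

def top_k_gate(logits: list[float], k: int = 4) -> dict[int, float]:
    """Select the k highest-scoring experts and softmax their logits into gates."""
    top = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:k]
    exps = {i: math.exp(logits[i]) for i in top}
    total = sum(exps.values())
    return {i: e / total for i, e in exps.items()}  # gates sum to 1

# 64 experts, top-4: the lattice-120b configuration
router_logits = [((i * 37) % 64) / 8.0 for i in range(64)]  # stand-in scores
gates = top_k_gate(router_logits, k=4)
```

Each token's output is then a gate-weighted sum over just those 4 experts, which is how a 120B-parameter model can run with far fewer active parameters per token.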
🔷
Matrix Vexa
Coming Soon
Crystalline Intelligence Substrate · Non-Neural
"Not a model. A new way of knowing."
Vexa is not a language model. It is a living lattice of Glyphs — structured meaning objects that grow through Crystallisation instead of training. No GPU. No gradients. No interpolation. Zero hallucination from probability compression.
orenthal/vexa Non-neural · CPU · Real-time Crystallisation $2.00
No Training Required · Glyph Lattice · Lume Language · Closed Source · Zero Hallucination
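Vexa's internals are closed, but the description above suggests a substrate of linked meaning objects that grows by adding links rather than updating weights. A purely hypothetical sketch of what a Glyph might look like; every name here is an assumption, not Vexa's real schema:

```python
from dataclasses import dataclass, field

@dataclass
class Glyph:
    """Hypothetical structured meaning object (not Vexa's actual API)."""
    concept: str
    relations: dict[str, "Glyph"] = field(default_factory=dict)

    def crystallise(self, relation: str, other: "Glyph") -> None:
        """Grow the lattice by linking glyphs instead of adjusting weights."""
        self.relations[relation] = other

water = Glyph("water")
ice = Glyph("ice")
water.crystallise("freezes_into", ice)
```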
Developer Interface

The Orenthal API

OpenAI-compatible. Drop-in replacement. Change one URL and you're live.

Pricing — Per 1M Tokens
Model Architecture Price Status
⚡ Zeus SACM · 70B–1T $1.00 Early Access
🌐 Lattice MoE · 120B–671B $0.80 Live
🔷 Vexa Crystalline · Non-neural $2.00 Coming Soon
Endpoints
POST /v1/chat/completions Chat · stream supported
GET /v1/models List all models
GET /v1/zeus/epistemic Zeus confidence metadata
POST /v1/zeus/clarify Clarify a flagged ambiguity
POST /v1/zeus/restate Restate with different EQ register
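The Zeus extensions are plain HTTP routes, so any client can hit them. A sketch that builds (without sending) a request to the confidence-metadata endpoint using the standard library; the response schema is not documented here, so none is assumed:

```python
from urllib.request import Request

BASE_URL = "https://orenthal.onrender.com/v1"

def zeus_epistemic_request(api_key: str) -> Request:
    """Build, but do not send, a GET to Zeus's epistemic endpoint."""
    return Request(
        f"{BASE_URL}/zeus/epistemic",
        headers={"Authorization": f"Bearer {api_key}"},
        method="GET",
    )

req = zeus_epistemic_request("your-orenthal-key")
```

Sending it is one `urllib.request.urlopen(req)` call, or swap in any HTTP client; the same bearer-token header works for the POST routes as well.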
Python
JavaScript
cURL
# pip install openai
import openai

client = openai.OpenAI(
    base_url="https://orenthal.onrender.com/v1",
    api_key="your-orenthal-key",
)

# ⚡ Zeus — $1.00 / 1M tokens
response = client.chat.completions.create(
    model="orenthal/zeus-70b",
    messages=[
        {"role": "system", "content": "You are Zeus."},
        {"role": "user",   "content": "Hello, Orenthal."},
    ],
    temperature=0.7,
    max_tokens=1024,
)
print(response.choices[0].message.content)
// npm install openai
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://orenthal.onrender.com/v1",
  apiKey: "your-orenthal-key",
  dangerouslyAllowBrowser: true, // only needed in-browser; keep keys server-side in production
});

// 🌐 Lattice — $0.80 / 1M tokens
const response = await client.chat.completions.create({
  model: "matrix-corp/lattice-120b",
  messages: [
    { role: "user", content: "Plan a 6-month roadmap." }
  ],
  stream: true,
});

for await (const chunk of response) {
  process.stdout.write(
    chunk.choices[0]?.delta?.content ?? ""
  );
}
# ⚡ Zeus — $1.00 / 1M tokens
curl https://orenthal.onrender.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer your-orenthal-key" \
  -d '{
    "model": "orenthal/zeus-70b",
    "messages": [
      {
        "role": "user",
        "content": "Hello, Orenthal."
      }
    ],
    "temperature": 0.7,
    "max_tokens": 1024,
    "stream": false
  }'
Matrix.Corp Open Research

The Research Foundation

Orenthal is built on the research of Matrix.Corp — an open research lab working on novel AI paradigms from Singapore.

🌐
Matrix Lattice
Frontier Agentic MoE
671B-parameter frontier MoE with 17 custom modules. 1M context. The most capable system in the Matrix.Corp stack.
Released
🩸
Matrix ECHO
Living Error Memory
27B coding LLM in Rust. Mistakes crystallise into Scars — typed memory objects. Pre-scans its lattice before every response.
Building
🌌
Zenith
Reasoning + EQ
The original EQ Engine architecture. Ring Attention; MoE with 12 experts, top-2 routing. Zeus's EQ Engine V3 evolved from Zenith V1.
Preview
🔬
Vortex Scientific
Deep Science Reasoning
Custom 50K science tokenizer. Hybrid SSM+Attention. Equation, LaTeX, Molecular, Numerical domain modules.
Preview
🌿
Touch Grass
Music AI
LoRA on Qwen3.5. Tab & Chord Module, Music Theory Engine, Ear Training, EQ Adapter with 4 emotional modes.
Preview
🔷
Matrix Vexa
Crystalline Intelligence
Non-neural knowledge substrate. Glyphs instead of weights. Crystallisation instead of training. A genuinely new paradigm.
Paused / Resuming
All open-source research is published on Hugging Face under Matrix-Corp.
View Matrix.Corp →
The Orenthal Thesis

Completeness
Over Scale.

Every frontier lab is trying to be bigger. Orenthal is trying to be complete.

I
Sovereign Reasoning
Zeus does not interpolate an answer to a question it has not thought about. The Intent Crystallisation Pre-Pass is an architectural constraint — not a prompt trick, not a parlour act. Zeus reasons before it speaks. Always.
II
Calibrated Honesty
The Epistemic Calibration Head is trained jointly with the model — not bolted on after. When Zeus is uncertain, it says so specifically. Not "I might be wrong." Which clause. Which domain. Which span. Honesty as architecture.
III
Human Intelligence
Eight dedicated EQ layers run in parallel with the reasoning stack. EQ Engine V3 is not sentiment analysis appended as a wrapper. It is bidirectional with reasoning. It is aware of the full conversation arc. It notices the person, not just the words.
The Statement
"Zeus is Orenthal's proof of concept — not for the world, but for ourselves. Proof that a boutique lab, built from the ground up in Singapore, can produce a frontier model that belongs in the same conversation as the biggest labs on earth. Not by being bigger. By being more complete."
— Zeus Spec, 2026