The Principle المبدأ

Dimension 1 — Root (الجذر)

Three consonants encode an entire semantic domain. ك-ت-ب (k-t-b) = everything about writing. ع-ل-م (ʿ-l-m) = everything about knowledge. The engine covers 152 roots across 15 domains.

Dimension 2 — Pattern (الوزن)

A morphological operator transforms the root's semantic field. ك-ت-ب × فاعل (the agent pattern fāʿil) = كاتب (kātib, writer). ك-ت-ب × مفعلة (the place pattern mafʿala) = مكتبة (maktaba, library). Same root, different output — all formally derived.
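The root × pattern composition can be sketched as a pure lookup. This is a minimal illustration: the type names, transliterated keys, and gloss table below are assumptions, not the engine's actual data.

```typescript
// Hypothetical sketch of root × pattern composition. Keys use Arabizi
// transliteration ("ktb" = ك-ت-ب); the real engine's tables differ.
type Root = "ktb" | "3lm" | "jm3";
type Pattern = "agent" | "place";

// Each root names a semantic domain.
const rootDomain: Record<Root, string> = {
  ktb: "writing",     // ك-ت-ب
  "3lm": "knowledge", // ع-ل-م
  jm3: "gathering",   // ج-م-ع
};

// A pattern is a morphological operator over the root's domain.
const compose = (root: Root, pattern: Pattern): string =>
  pattern === "agent"
    ? `one who does ${rootDomain[root]}` // فاعل → agent noun (kātib)
    : `place of ${rootDomain[root]}`;    // مفعلة → place noun (maktaba)

console.log(compose("ktb", "agent")); // "one who does writing" → writer
console.log(compose("ktb", "place")); // "place of writing" → library
```

The point of the sketch is that derivation is total and deterministic: every (root, pattern) pair maps to exactly one output, with no lookup table of whole words.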

Architecture البنية
  "Schedule a meeting with the team tomorrow"         ← English or Arabic
                         ↓
               ┌─────────────────────┐
               │  encodeLocal()      │  ← 5-layer deterministic attention
               └────────┬────────────┘
                        ↓
          ┌──────────────────────────┐
          │       AlgebraToken       │
          │  intent: seek            │
          │  root:   جمع (gathering)  │
          │  pattern: place          │
          └────────────┬─────────────┘
                       ↓
               ┌────────────────┐
               │  engine.reason │  ← 80 symbolic rules, pure function
               │  seek × place  │
               │     → schedule │
               └───────┬────────┘
                       ↓
               ┌─────────────────┐
               │  decodeLocal()  │  ← Template-based, 16 action types
               └────────┬────────┘
                        ↓
  "I'll schedule a meeting with the team for tomorrow."
Live Examples أمثلة حية

Every example runs in your browser — no server, no API, no GPU. Click any input to see the algebra decomposition and tool routing.

What It Does ماذا يفعل

✓ Today (Symbolic Engine)

  • Intent classification from natural language (EN + AR)
  • Deterministic tool routing across 3 domains (telecom, banking, healthcare)
  • Multi-step request decomposition ("check balance and change plan")
  • 66 tools, 152 roots, 80 symbolic rules
  • Sub-millisecond, offline, zero cost
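Multi-step decomposition ("check balance and change plan") can be sketched as splitting on conjunctions and routing each clause independently. This is a toy router under assumed tool names; the real engine routes via its 80 symbolic rules and 66 tools.

```typescript
// Toy sketch of multi-step decomposition and deterministic routing.
// Tool names and the conjunction splitter are illustrative assumptions.
const toolTable: Array<[RegExp, string]> = [
  [/balance/i, "banking.get_balance"],
  [/plan/i, "telecom.change_plan"],
  [/appointment/i, "healthcare.book_appointment"],
];

// Route a single clause to the first matching tool.
function routeOne(clause: string): string {
  for (const [re, tool] of toolTable) if (re.test(clause)) return tool;
  return "fallback";
}

// Split on "and" (or Arabic و) and route each clause independently.
function decompose(text: string): string[] {
  return text.split(/\band\b|و/).map((c) => routeOne(c.trim()));
}

console.log(decompose("check balance and change plan"));
// ["banking.get_balance", "telecom.change_plan"]
```

Each clause becomes its own token and tool call, so a compound request fans out into an ordered plan without any LLM involvement.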

✓ Today (1.5M Neural Model)

  • Handles typos, slang, and dialectal Arabic
  • 90.2% validation accuracy on structured tasks
  • Runs in-browser via ONNX (6 MB model)
  • Same algebra vocabulary — model learns text→token mapping
  • Try it in the Chat page
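Because the neural model shares the algebra vocabulary, its output heads can be decoded straight into the same token fields the symbolic engine uses. The sketch below assumes per-field classification heads; the vocab lists and example logits are illustrative, and the real model is the 1.5M-parameter ONNX network running in-browser.

```typescript
// Sketch of decoding neural heads into the shared algebra vocabulary.
// Vocab lists and logits are illustrative assumptions.
const intentVocab = ["seek", "inform", "cancel"];
const rootVocab = ["جمع", "كتب", "علم"];

// Index of the largest logit.
const argmax = (xs: number[]): number =>
  xs.reduce((best, x, i) => (x > xs[best] ? i : best), 0);

// Hypothetical head outputs for a typo-laden input like "shedule a meetng":
const intentLogits = [2.7, 0.3, -1.1];
const rootLogits = [1.9, -0.2, 0.4];

// The neural path yields the same token shape as the symbolic encoder.
const token = {
  intent: intentVocab[argmax(intentLogits)], // "seek"
  root: rootVocab[argmax(rootLogits)],       // "جمع"
};
console.log(token);
```

This is what lets noisy input (typos, slang, dialect) fall through to the neural encoder while everything downstream of the token stays symbolic and deterministic.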

→ Future: Replace LLM Calls

In production AI systems (customer support, sales agents), LLMs handle intent classification, tool routing, and planning: typically 3-5 API calls per message, at $0.01-0.05 per conversation. AAE can replace the structured portion of that work (classification, routing, decomposition) with sub-millisecond deterministic inference, cutting LLM calls by 50-70% and leaving the LLM for only the genuinely complex reasoning and response generation.
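The savings claim is easy to check with back-of-envelope arithmetic. The 3-5 calls per message, $0.01-0.05 per conversation, and 50-70% reduction figures come from the paragraph above; the conversation length and the assumption that cost scales evenly with call count are illustrative.

```typescript
// Back-of-envelope check of the claimed reduction. Figures in the
// ranges below come from the text; the 10-message conversation and
// even per-call cost split are assumptions.
const messages = 10;              // assumed conversation length
const callsPerMessage = 4;        // text: 3-5 LLM calls per message
const costPerConversation = 0.03; // text: $0.01-0.05
const reduction = 0.6;            // text: 50-70% of calls replaced

const totalCalls = messages * callsPerMessage;         // 40 LLM calls
const remainingCalls = totalCalls * (1 - reduction);   // 16 LLM calls
const newCost = costPerConversation * (1 - reduction); // ~$0.012

console.log(remainingCalls, newCost.toFixed(3)); // 16 "0.012"
```

At the midpoints, a 40-call conversation drops to 16 LLM calls, with the remaining 24 handled by the deterministic engine at effectively zero marginal cost.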