
Today we’re excited to share more about Aurelia — the AI-native systems programming language at the heart of the Deepcomet AI ecosystem.

Why a New Language?

The question we hear most often is: “Why not just use Rust or C++ with ML libraries?” The answer lies in the fundamental mismatch between today’s languages and AI workloads.

Current languages treat tensors as library objects. They handle automatic differentiation through runtime tracing. They target GPUs through thick abstraction layers. Every one of these choices introduces overhead.

Python gives you expressiveness but not performance. C++ gives you performance but not safety. Rust gives you safety but has no native understanding of neural computation. You’re always trading something essential away.

Aurelia starts from different assumptions:

  • Tensors are first-class types, not library objects
  • Automatic differentiation happens at compile time, not runtime
  • Hardware targeting goes through MLIR, not vendor-specific APIs
  • Memory management understands AI-specific patterns like gradient buffers

Key Features

First-Class Tensor Types

In Aurelia, Tensor<f32, 2> is as fundamental as i32 or bool. The compiler understands tensor shapes, validates operations at compile time, and generates optimal code for the target hardware.

// Tensors are native types
let input: Tensor<f32, 2> = Tensor::random([32, 256]);   // a batch of 32 inputs
let weights: Tensor<f32, 2> = Tensor::random([256, 512]);
let biases: Tensor<f32, 1> = Tensor::zeros([512]);

// Operations are native syntax: @ is matrix multiplication
let output = (input @ weights) + biases;
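Because shapes are part of the type, a mismatched operation fails at compile time rather than at runtime. A sketch of what that looks like (the exact diagnostic wording is illustrative):

// This does not compile: the inner dimensions disagree
let bad = weights @ weights;
// error: cannot multiply Tensor<f32, 2> of shape [256, 512]
//        by Tensor<f32, 2> of shape [256, 512]: 512 != 256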

Automatic Differentiation

The gradient() function is a compiler intrinsic. The compiler builds computation graphs during compilation, analyzes data dependencies, and generates optimized backward passes. This means:

  • Zero overhead for forward passes when gradients aren’t needed
  • Optimal gradient computation — the compiler can fuse operations, eliminate redundant computations, and schedule gradient calculations across available hardware
  • Compile-time verification — shape mismatches in gradient computation are caught before runtime
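As an illustrative sketch of how this reads in practice (the surface syntax may still change, and mse, targets, and learning_rate here are hypothetical names, not standard library APIs):

// gradient() is resolved at compile time: the compiler derives the
// backward pass from the body of the function it receives.
let loss_fn = |w: Tensor<f32, 2>| mse(input @ w, targets);
let grads = gradient(loss_fn)(weights);   // same shape as weights
let weights = weights - learning_rate * grads;

Because the graph is built during compilation, the forward-only call path pays nothing for this: gradient code is only generated where gradient() is actually used.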

MLIR Compilation

Aurelia compiles through MLIR (Multi-Level Intermediate Representation), which allows targeting multiple hardware backends:

  • CPUs — via auto-vectorization to SIMD instructions
  • GPUs — via GPU-specific MLIR dialects
  • NPUs — direct targeting of neural processing units like Qualcomm Hexagon
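A hypothetical sketch of what hardware targeting looks like at the source level. The @target annotation is the one the Zenith Kernel integration consumes; the specific arguments shown here are illustrative, not finalized syntax:

// Annotate a function with acceptable backends; the MLIR pipeline
// lowers the same body once per target, and the runtime picks one.
@target(npu, gpu, cpu)
fn dense(x: Tensor<f32, 2>, w: Tensor<f32, 2>, b: Tensor<f32, 1>) -> Tensor<f32, 2> {
    (x @ w) + b
}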

Memory Safety Without GC

Like Rust, Aurelia provides compile-time memory safety without a garbage collector. But it extends this with AI-specific patterns:

  • Tensor lifetime tracking — the compiler understands when tensor memory can be reused
  • Gradient buffer management — automatic allocation and deallocation of gradient storage
  • Zero-copy tensor views — safe slicing and reshaping without data copies
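A sketch of how zero-copy views might read; the slicing and reshape syntax here is assumed, not finalized:

let activations: Tensor<f32, 2> = Tensor::random([64, 512]);

// A view borrows the underlying buffer: no data is copied, and the
// compiler's lifetime tracking prevents the view from outliving the
// tensor it borrows from.
let first_half = activations[.., 0..256];     // shape [64, 256], zero-copy
let flat = activations.reshape([64 * 512]);   // shape [32768], zero-copy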

Integration with the Deepcomet AI Stack

Aurelia isn’t designed in isolation — it’s the language layer of a vertically integrated computing platform:

  • Zenith Kernel understands Aurelia’s @target annotations and can transparently route workloads between NPUs and GPUs based on real-time load conditions
  • SkyOS native apps are written in Aurelia, sharing a unified type system and memory model with the operating system itself
  • The Forge migrates legacy C++, Java, and Python codebases to idiomatic Aurelia with automated test generation

This vertical integration means optimizations compound across layers in ways that are impossible when each layer is developed independently.

The Road Ahead

Aurelia is in active development. We’re building the compiler frontend, MLIR backend, and standard library in parallel. Key milestones include:

  • Complete MLIR backends for Qualcomm Hexagon and Apple Neural Engine
  • Package manager and build system (AureliaForge)
  • Standard library for networking, filesystem, and async I/O
  • Language server protocol (LSP) for full IDE support
  • Formal language specification and reference manual

Follow the development on our ecosystem page and GitHub.

The future of AI isn’t about better libraries — it’s about better languages. Aurelia is that language.