We're in the middle of a fundamental shift in how code gets written. AI-assisted development isn't a novelty anymore—it's becoming the default. GPT-4, Claude, Copilot, and their successors generate code at scale, but there's a problem: every token counts.
Language models consume and produce text one token at a time. Multi-character keywords like
function, struct, or async don't just take up visual space—they
consume tokens in the model's context window, increase API costs, and slow down generation. When you're
working with a 100K token context limit and trying to include documentation, examples, and generated code,
every saved token matters.
This is why we built Vais.
The Token Efficiency Problem
Let's talk numbers. A typical struct definition in Rust requires keywords like struct,
fn, and impl. These are tokenized inefficiently—struct alone
is often 2-3 tokens depending on the tokenizer. In Vais, the equivalent keyword is S:
a single character, always one token.
Across a realistic codebase, this adds up fast. Our benchmarks show Vais programs use 30-40% fewer tokens than equivalent Rust code. For AI models working under tight context limits, that's the difference between fitting your entire module in context or having to truncate critical parts.
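As a rough, directional illustration, the snippet below compares the raw source length of equivalent Vais and Rust one-liners. Character count is only a crude proxy: actual savings depend on the model's BPE tokenizer, where short single-character keywords tend to map to single tokens while longer identifiers may split into several.

```python
# Crude proxy: compare source sizes of equivalent definitions.
# Real token counts depend on the model's tokenizer, not raw characters.
vais = "F fib(n: i64) -> i64 { I n <= 1 { n } E { @(n - 1) + @(n - 2) } }"
rust = "fn fib(n: i64) -> i64 { if n <= 1 { n } else { fib(n - 1) + fib(n - 2) } }"

print(f"Vais: {len(vais)} chars, Rust: {len(rust)} chars")
```

The character-level gap understates the token-level gap, since multi-character keywords and repeated identifiers are exactly where subword tokenizers spend extra tokens.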
Single-Character Keywords: Not Just Shorter, Smarter
Vais uses single-character keywords throughout:
- F for function definitions
- S for struct declarations
- I and E for if/else branches
- L for loops
- M for match expressions
- V for variable bindings
- T for trait definitions
This isn't code golf. It's a deliberate design choice optimized for both AI generation and human
readability in the context window. When an AI model sees F fib(n: i64) -> i64,
it immediately knows this is a function. The cognitive load for humans is minimal—you adapt in
minutes—but the token savings compound over thousands of lines.
// Vais: ~18 tokens
F factorial(n: i64) -> i64 {
I n == 0 { 1 }
E { n * @(n - 1) }
}
// Rust: ~32 tokens
fn factorial(n: i64) -> i64 {
if n == 0 { 1 }
else { n * factorial(n - 1) }
}
The @ Operator: Self-Recursion Without Repetition
One of Vais's standout features is the @ operator for self-recursion. Instead of
repeating the function name in recursive calls, you use @:
F fib(n: i64) -> i64 {
I n <= 1 { n }
E { @(n - 1) + @(n - 2) }
}
This saves tokens (no need to tokenize fib twice) and makes refactoring easier.
Rename the function? The recursive calls update automatically. For AI models, it's one less
identifier to track across context.
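The Vais compiler can presumably resolve @ to the enclosing function at compile time. As a loose analogue of that behavior (not the actual implementation), a Python decorator can hand a function a reference to itself, so the body never repeats its own name:

```python
from functools import wraps

def selfref(f):
    """Pass f a handle to itself as its first argument, mimicking Vais's @."""
    @wraps(f)
    def wrapper(*args):
        return f(wrapper, *args)
    return wrapper

@selfref
def fib(rec, n):
    # 'rec' plays the role of @: rename fib and nothing else changes
    return n if n <= 1 else rec(n - 1) + rec(n - 2)

print(fib(10))  # 55
```

As in Vais, renaming the function touches exactly one identifier, and the recursive calls follow along for free.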
Expression-Oriented: Everything Returns a Value
Vais is expression-oriented. If/else, match, and blocks all return values. The last expression
in a block is implicitly returned—no return keyword needed:
F abs(x: i64) -> i64 {
I x < 0 { -x } E { x }
}
This design reduces syntactic noise and makes the code more composable. Functional patterns feel natural without sacrificing imperative control flow when you need it.
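Python is statement-oriented, but its conditional expression gives a feel for the same idea. A hypothetical translation of the abs example above, where the entire if/else evaluates to a value:

```python
def vais_abs(x: int) -> int:
    # The whole if/else is a single value-producing expression,
    # mirroring Vais's expression-oriented if/else
    return -x if x < 0 else x

print(vais_abs(-7), vais_abs(3))  # 7 3
```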
LLVM Backend: Native Performance, No Compromises
Token efficiency doesn't mean runtime slowness. Vais compiles to native code via LLVM with full optimization support (LTO, PGO). Benchmarks show performance on par with C and Rust. You get the ergonomics of a modern high-level language without the interpreter overhead.
The compiler performs aggressive inlining, dead code elimination, and SIMD autovectorization where applicable. For systems programming tasks—parsers, network servers, embedded code—Vais delivers predictable, low-latency execution.
Full Toolchain: Production-Ready From Day One
Vais ships with a complete development toolchain:
- LSP server with autocomplete, go-to-definition, and inline diagnostics
- Formatter for consistent code style (no debates)
- REPL for interactive experimentation
- Package manager with centralized registry
- IDE plugins for VSCode and IntelliJ
- Debugger integration via LLDB
This isn't a weekend experiment. The tooling ecosystem is stable, documented, and actively maintained.
How Vais Compares
vs. Rust: Vais trades Rust's borrow checker for simplicity and token efficiency. Memory safety is still enforced, but through runtime checks rather than compile-time lifetimes. For AI-generated code, this reduces cognitive overhead and makes corrections easier.
vs. Python: Python is token-heavy, with keywords like def and
class plus verbose decorator syntax. Vais is comparable in conciseness but compiles to
native code, giving you 10-100x speedups for compute-intensive tasks.
vs. Go: Go prioritizes simplicity but still uses multi-character keywords. Vais pushes minimalism further while adding generics and traits (which Go only recently got).
Try It Yourself
Vais is open source and ready to use. Install it via Homebrew, Cargo, or Docker and start writing code that's optimized for the AI-assisted future:
brew tap vaislang/tap && brew install vais
The interactive playground lets you experiment in your browser—no installation required. Write a function, see the compiled output, and watch how token efficiency translates to real savings.