Mapanare — The AI-Native Programming Language
Mapanare is the first AI-native compiled programming language where agents, signals, streams, and tensors are first-class language primitives — not libraries bolted on after the fact. It compiles to Python for ecosystem access or to native binaries via LLVM for up to 63x faster execution.
Key Features
- Agents — Concurrent actors with typed channels, lifecycle management, supervision, and backpressure. The native LLVM backend runs agents on OS threads connected by lock-free ring buffers.
- Signals — Reactive state with automatic dependency tracking and computed values.
- Streams — Async pipelines with the pipe operator. Adjacent map/filter operations fuse automatically.
- Tensors — N-dimensional arrays with compile-time shape validation and matrix multiplication.
- Traits — Generic abstractions with trait bounds. Built-in traits: Display, Eq, Ord, Hash.
- Modules — File-based module resolution with pub visibility and circular import detection.
- Type System — Static typing with inference, generics, Option, Result, and error propagation.
- Dual Compilation — Transpile to Python or compile to native binaries via LLVM.
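To make the signal model concrete, here is a minimal Python sketch of reactive state with automatic dependency tracking, in the spirit of Mapanare's signals. The `Signal` and `Computed` classes are illustrative assumptions, not Mapanare's actual runtime or generated code:

```python
# Minimal sketch of signal-style reactive state with automatic
# dependency tracking. Illustrative only; not Mapanare's implementation.

_active = []  # stack of computed values currently evaluating

class Signal:
    def __init__(self, value):
        self._value = value
        self._subscribers = set()

    def get(self):
        # Reading inside a computed value records a dependency edge.
        if _active:
            self._subscribers.add(_active[-1])
        return self._value

    def set(self, value):
        self._value = value
        for sub in list(self._subscribers):
            sub.invalidate()

class Computed:
    def __init__(self, fn):
        self._fn = fn
        self._cached = None
        self._dirty = True

    def invalidate(self):
        self._dirty = True

    def get(self):
        if self._dirty:
            _active.append(self)
            try:
                self._cached = self._fn()  # dependencies re-tracked here
            finally:
                _active.pop()
            self._dirty = False
        return self._cached

count = Signal(2)
doubled = Computed(lambda: count.get() * 2)
print(doubled.get())  # 4
count.set(10)
print(doubled.get())  # 20 — recomputed because count invalidated it
```

The key idea is that dependencies are discovered at read time, so a computed value never needs to declare what it depends on.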
Get Started
Install Mapanare with a single command:
- Linux/macOS: curl -fsSL https://mapanare.dev/install | bash
- Windows (PowerShell): irm https://mapanare.dev/install.ps1 | iex
- Via pip: pip install mapanare
Quick Example
agent Classifier {
    input text: String
    output label: String

    fn handle(text: String) -> String {
        return "positive"
    }
}

fn main() {
    let cls = spawn Classifier()
    cls.text <- "Mapanare is fast"
    let result = sync cls.label
    print(result)
}
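Since Mapanare can transpile to Python, the agent above behaves conceptually like a worker thread whose typed channels are queues. This is a rough Python analogue under that assumption, not the actual generated code:

```python
import queue
import threading

# Rough Python analogue of the Classifier agent: each channel becomes a
# queue, and `spawn` starts a worker thread. Illustrative sketch only.

class Classifier:
    def __init__(self):
        self.text = queue.Queue()   # input channel
        self.label = queue.Queue()  # output channel
        threading.Thread(target=self._run, daemon=True).start()

    def _run(self):
        while True:
            text = self.text.get()           # receive on cls.text
            self.label.put(self.handle(text))

    def handle(self, text: str) -> str:
        return "positive"

cls = Classifier()
cls.text.put("Mapanare is fast")   # cls.text <- "Mapanare is fast"
print(cls.label.get())             # sync cls.label
```

Sending on a channel enqueues a message; `sync` corresponds to a blocking read on the output queue.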
Documentation
- Getting Started
- Variables & Mutability
- Functions & Lambdas
- Control Flow
- Structs & Enums
- Type System
- Error Handling
- Operators
- Modules & Imports
- Traits
- Decorators
- Agents
- Signals
- Streams
- Tensors
- Pipes
- CLI Reference
- Standard Library
Performance
Mapanare's LLVM backend delivers significant performance gains: Fibonacci (n=35) runs 26.5x faster than Python, stream pipelines are 62.8x faster, and matrix multiplication is 22.9x faster.
For AI & LLMs
Full machine-readable documentation is available at llms.txt (summary) and llms-full.txt (complete reference with all code examples).