Company
A research lab that operates a cloud.
An R&D-led GPU cloud and inference platform, founded in 2022 out of an inference-acceleration research program. Peer-reviewed research output, a multi-region footprint, and a brand family: EGILAX, MED.REPORT, SEFIROT.AI, PULSAR.GLOBAL.
- Founded
- 2022
- Regions
- 7
- Day-one fleet
- B200 / B300
- Operating model
- Research-led
- Ownership
- Private
Timeline
Four years, in five entries.
The condensed version. The blog has the unabridged.
- 2022
Founded as a research lab.
The founding team incorporates iframe.ai out of an inference-acceleration research program. First paper drops within six months — a measurable reduction in attention compute on open-weight foundation models.
- 2023
First production cluster.
Initial racks of NVIDIA A100 in a Cologix-operated colocation facility, leased to a handful of friend-of-the-lab customers. The runtime ships before the marketing site.
- 2024
Blackwell day-one access.
Direct allocation from NVIDIA, paired with Dell PowerEdge XE9680 and Supermicro NVL72-class systems built on Samsung HBM, puts iframe.ai among the first independent clouds on Blackwell.
- 2025
Managed inference, GA.
The inference engine built in the lab becomes a product. Twenty times the throughput of the vanilla engine across the open-source model catalog, validated on customer workloads and published with reproducible eval data.
- 2026
Today.
Reserved capacity contracts with AI labs, neoclouds, and enterprise platform teams. Multi-region footprint, peer-reviewed research output, and a brand family: EGILAX, MED.REPORT, SEFIROT.AI, PULSAR.GLOBAL.
Principles
Six things that don't move.
Research first, then product.
Every line of the runtime came from an ablation we ran. The org chart matches the priority: the research lab and the inference team share a manager.
Honest pricing.
List price is what you pay. We publish hyperscaler prices on our own pricing page. The CSV behind every comparison ships in our pricing-data repo.
Reproducible benchmarks.
Every claim links to its methodology. Every benchmark is published with raw data. We open-source the eval harness so customers can verify on their own data.
We list our weaknesses.
Comparison pages have a 'where they win' column. The pricing page tells you when to use a hyperscaler. Customers buy clouds for years; we'd rather they trust us than discover the trade-offs late.
Long contracts are a privilege.
Reserved capacity costs us less; the discount goes back to the customer. The renewal is a formality, not a renegotiation.
We work in the open.
Engineering posts ship with repos. Papers ship with weights when we can. The status page has a public history. We tell customers when we are wrong.
Brand family
Four brands, one cap table.
EGILAX, MED.REPORT, SEFIROT.AI, and PULSAR.GLOBAL are sister companies: independent products with shared ownership and a common research backbone. Customers contracting with iframe.ai get visibility across the family.
See the family
The lab publishes papers under the iframe.ai name. Engineers split time between the runtime and a sister property where the work lands first. Most customer-facing case studies feature at least one of the four.
See the lab
Infrastructure partners
Real hardware, real partners.
Our clusters run on first-party NVIDIA allocations of Blackwell-class GPUs, Samsung HBM3e memory, Dell and Supermicro chassis at the rack and pod level, and Cologix-operated colocation facilities for carrier-neutral interconnect. No marketplace resellers, no consumer SKUs in the rack.
- NVIDIA
GPUs · Blackwell day-one
- Samsung
HBM3e · DRAM
- Dell
PowerEdge XE9680
- Supermicro
NVL72-class systems
- Cologix
Colocation · Interconnect
Operating model
Where the work happens.
Headquarters · Engineering · Go-to-market
Long-context · Inference acceleration · Mixed-precision
Customer engineering · GDPR-resident workloads
Want to work with us?
Customer engagements start with a one-call scoping. Career applications are open. Press inquiries get a same-day reply.