Careers
We are selective because the work demands it. Every person who joins VedhaAI has direct ownership over systems that run in production and produce measurable outcomes.
We are a small, serious AI engineering company. We don't have a long career ladder or a large HR team. What we have is interesting work, a high bar, and a culture where technical depth is genuinely valued — not performed.
The people who thrive here are engineers and researchers who have gone deep on hard problems, who care about whether their systems actually work in production, and who find it unsatisfying to ship things they cannot measure and explain.
We are remote-friendly for senior roles, headquartered in Toronto, and interested in global talent. We believe in timezone overlap, asynchronous-first communication, and writing over meetings wherever possible.
What we value
We respect engineers who have gone genuinely deep on hard problems. Being a generalist is useful. Being an expert at something — truly expert, with scars from production failures — is rare and valuable here.
You believe in tests, monitoring, documentation, and rollback plans. You have seen what happens when these are absent at 3am and you have learned the right lessons from it.
You define success before you start building. You are skeptical of demos. You insist on measuring what matters in the real deployment environment, not just on convenient benchmarks.
You say what you think, including when a proposed approach will not work. You update your views when evidence changes. You prefer being right over being comfortable.
You do not consider a task done when the code is merged. You watch the dashboards after deployment. You care about whether the system is actually delivering value to the people using it.
Research & engineering focus
Fine-tuning, DPO, RLHF, preference data collection, and alignment techniques applied at practical scale.
Quantisation, speculative decoding, continuous batching, and serving throughput at the GPU level.
Hybrid retrieval, embedding model adaptation, reranking architectures, and knowledge graph integration.
Multi-agent orchestration, planning, reliable tool use, and autonomous workflow automation in enterprise environments.
Automated evaluation pipelines, hallucination detection, golden dataset management, and production monitoring.
Feature stores, streaming pipelines, Lakehouse architectures, and data quality systems built for ML consumption.
Open positions
Hiring process
1. We review every application within one week. We look for demonstrated experience with production AI — not credential lists.
2. A 45-minute conversation with a senior engineer. We discuss real technical problems and past systems you have built. No whiteboard trivia.
3. A take-home exercise relevant to the role. We pay for your time. We do not ask for unpaid days of work.
4. A conversation about your technical vision, how you think about tradeoffs, and whether we are the right fit for each other.
We always want to hear from exceptional ML engineers, research scientists, and AI infrastructure builders — even when we have no specific role posted. Send us your work: GitHub, papers, technical writing, or a description of the hardest system you have built.
Get in touch