Founding ML Research Engineer — PhD (Top ML Program) or Top Lab/Trading Background (NYC, In-Person) with 0.10%–1.00% Equity
Bounty Amount: $15,000–$25,000 ($25,000 cap)
Company Name: Zoa Research
Role Type: Full-Time
Location: New York City – fully in-person. Must be willing to relocate to NYC; Zoa can sponsor TN / E-3 visas.
Salary / Hourly Rate: $150,000–$400,000 per year
Benefits:
- Early, meaningful equity at founding-team level (0.10%–1.00%).
- Fully in-person research environment with tight feedback loops and direct founder access.
- Opportunity to work on problems spanning trading, forecasting, and scientific discovery.
- TN / E-3 visa sponsorship available (US citizenship preferred but not required).
Role Information
Role Overview: N/A
Responsibilities: Over time, help define Zoa’s roadmap for using forecasting models beyond trading, including applications in scientific discovery and complex real-world systems.
Qualifications:
Education-first bar (one of the following):
- PhD in ML / CS / Statistics (or a very closely related field) from a top research university.
- Equivalent research pedigree from a top ML research lab (e.g., OpenAI, Anthropic, DeepMind, FAIR, Mistral, NVIDIA Research, MSR, Google Brain) or an elite quantitative trading firm (e.g., Jane Street, Citadel, HRT, Two Sigma, Renaissance).
Additional qualifications:
- Hands-on model training: proven experience training models end-to-end (data → objective → training loop → evaluation), not just calling hosted APIs.
- Research fundamentals: strong grounding in ML theory, statistics, and experimental design; able to reason about why methods work and how to improve them.
- Modern ML frameworks: expert with PyTorch or JAX; comfortable implementing and modifying custom architectures, losses, and training loops.
- Strong Python engineering: writes clean, production-grade research code with tests, documentation, and thoughtful abstractions.
- Ownership mindset: comfortable owning open-ended research problems from idea → experiment → production system.
- In-person commitment: able to work full-time, fully in-person from New York City (relocation OK; TN / E-3 visa support available).
Minimum Requirements:
- Education-first bar: PhD in ML/CS/Stats (or adjacent) from a top university, OR equivalent research pedigree from OpenAI, Anthropic, DeepMind, FAIR, Mistral, NVIDIA Research, MSR, or Google Brain, or from Jane Street, Citadel, HRT, Two Sigma, or Renaissance.
- Hands-on experience training models end-to-end (data pipelines, training loops, evaluation), not just using hosted APIs.
- Strong ML fundamentals and experimental rigor (can design and interpret research-grade experiments).
- Expert PyTorch or JAX experience; comfortable implementing custom architectures and training code.
- Able to work full-time, fully in-person in NYC (relocation OK).
Screening Questions:
1. (Optional video) This step is completely optional. If you’d like, record a short 2–3 minute video introducing yourself and your experience — or share a recording of your interview with the recruiter if that’s easier. You can upload the link via Loom or Google Drive. This just helps us get to know you better, but there’s no pressure if you’d prefer to skip it.
2. (Optional portfolio / GitHub / Google Scholar) If available, please share a link to your GitHub, portfolio, Google Scholar/arXiv page, or any recent projects or papers you’ve worked on. This is entirely optional but helps provide more context about your work and research interests.
3. Tell us about an ML project where you trained or significantly improved a model beyond using an off-the-shelf API. What was the problem, how did you design the model and training pipeline, and what measurable impact did your work have (e.g., accuracy, robustness, speed, or business / user impact)?
Company Information
About Company: N/A
Culture: N/A
Additional Information
Interview Process:
Step 1 – Short survey. After reviewing resumes, Zoa sends a short online questionnaire (≈15 minutes) to candidates who meet the basic requirements.
Step 2 – Initial call. A 30–45 minute video call over Google Meet to learn more about your background, interests, and what you’re looking for, and to share more about Zoa, the role, and how the team works.
Step 3 – Take-home coding & ML exercise. A practical coding challenge (target ≤ 8 hours) focused on Python, problem-solving, and ML fundamentals relevant to Zoa’s work.
Step 4 – Onsite / virtual technical interviews. A series of deeper technical interviews with the CEO, CTO, and engineering team. Candidates in the New York City area are invited onsite; others join via Google Meet. These sessions cover ML system design, research thinking, and production engineering.
Step 5 – Offer & closing. For candidates who pass the loop, Zoa discusses compensation details, equity, and start date, and moves quickly to a formal offer.
Day to day: You’ll spend most of your time doing end-to-end machine-learning work on hard forecasting problems. A typical week blends: designing new model architectures and training runs in Python/JAX/PyTorch, cleaning and curating cross-domain datasets, and building evaluation pipelines to understand what’s actually improving forecast quality. Some days you’ll be reproducing or extending recent research results; other days you’ll be shipping small productized models or tools that other researchers at Zoa can use. You’ll own experiments from idea → implementation → analysis, and you’ll write clean, production-oriented code so that promising ideas can graduate into the core forecasting engine.
Team: You’ll report directly to the founders / research leads and work closely with a small, high-caliber team of ML engineers and quantitative researchers. The team is flat and highly collaborative – everyone contributes to ideas, code, and experiments. You’ll often pair with a founder to scope projects, choose modeling approaches, and set success criteria. Because the company is still very early, you’ll have a real seat at the table on decisions about research direction, infrastructure, and how Zoa’s forecasting models are exposed to customers.
Growth:
- Shape Zoa’s research roadmap for cross-domain forecasting and model evaluation.
- Raise the technical bar for future ML hires and mentor more junior researchers and engineers.
- Grow into a Staff / Principal Research Engineer or early research-team lead as the company scales.
Ideal Candidate Profile:
- Fundamentals-driven ML researcher – You have serious depth in machine learning and statistics, care about theoretical grounding, and enjoy thinking about why models work (or don’t) as much as making the charts go up.
- Production-minded engineer – You write clean, well-structured Python, are comfortable with modern deep learning frameworks (JAX / PyTorch or similar), and take pride in building training and evaluation code that other people can rely on.
- End-to-end owner – You like owning messy, open-ended modeling problems: defining the question, selecting data, designing experiments, tuning hyperparameters, and turning the best ideas into robust systems that can run at scale.
- Curious, rigorous collaborator – You enjoy reading papers, proposing new approaches, and debating trade-offs with other strong researchers. You can clearly explain modeling choices and results to technical peers and non-ML stakeholders.
- Excited about forecasting & decision-making – You’re motivated by building models that actually help people reason better about the future, not just chasing benchmark scores. You’d be happy to spend the next several years pushing the frontier of general-purpose forecasting models.