Founding ML Research Engineer — PhD (Top ML Program) or Top Lab/Trading (NYC, In-Person) with 0.10%–1.00% Equity
New York City – fully in-person. Must be willing to relocate to NYC; Zoa can sponsor TN / E-3 visas.
Full-Time
Est. Fee
$15,000–$25,000 ($25,000 cap)
Salary Range
$150,000–$400,000 per year
Contract
10% of salary; 60-day guarantee
Job Details
Job Overview
Key Responsibilities:
Design & Train ML Models: Develop large-scale ML models focused on quantitative forecasting, particularly for trading, exploring new architectures and objectives.
Experiment Execution: Conduct end-to-end experiments, analyze results, and iterate; work closely with founders to materialize research ideas into production systems.
Build Research Infrastructure: Develop the toolchain for training, evaluating, and deploying forecasting models while actively contributing to...
Responsibilities
This is a founding ML research role at Zoa. You’ll be part of a tiny team building the core quantitative models that power Zoa’s trading and long-term scientific forecasting engine. You’ll have substantial ownership over research direction, modeling choices, and how ideas turn into production systems.
What you’ll do
• Design and train large-scale ML models for quantitative forecasting, with an initial focus on trading and event prediction.
• Read, adapt, and extend recent ML research; implement ideas end-to-end in PyTorch or JAX.
• Design inductive biases, objectives, and evaluation protocols for forecasting under uncertainty.
• Run rigorous experiments: define hypotheses, design experiments, analyze results, and iterate quickly.
• Work closely with founders to translate open-ended research questions into concrete modeling directions and measurable outcomes.
• Build and maintain training, evaluation, and deployment tooling for forecasting models.
• Contribute to Zoa’s research culture: reading groups, sharing results, and setting a high bar for rigor and reproducibility.
Over time, help define Zoa’s roadmap for using forecasting models beyond trading, including applications in scientific discovery and complex real-world systems.
Qualifications
We’re looking for a research-first ML engineer with unusually strong fundamentals and educational pedigree. This role is optimized for people who come from elite research environments and want to build real-world forecasting systems from first principles.
Education-first bar (one of the following):
PhD in ML / CS / Statistics (or very closely related) from a top research university.
OR equivalent research pedigree from a top ML research lab (e.g., OpenAI, Anthropic, DeepMind, FAIR, Mistral, NVIDIA Research, MSR, Google Brain) or elite quantitative trading firm (e.g., Jane Street, Citadel, HRT, Two Sigma, Renaissance).
Hands-on model training: proven experience training models end-to-end (data → objective → training loop → evaluation). Not just calling hosted APIs.
Research fundamentals: strong grounding in ML theory, statistics, and experimental design; able to reason about why methods work and how to improve them.
Modern ML frameworks: expert with PyTorch or JAX; comfortable implementing and modifying custom architectures, losses, and training loops.
Strong Python engineering: write clean, production-grade research code with tests, documentation, and thoughtful abstractions.
Ownership mindset: comfortable owning open-ended research problems from idea → experiment → production system.
In-person commitment: able to work full-time, fully in-person from New York City (relocation OK; TN / E-3 visa support available).
Ideal Candidate
Fundamentals-driven ML researcher – You have serious depth in machine learning and statistics, care about theoretical grounding, and enjoy thinking about why models work (or don’t) as much as making the charts go up.
Production-minded engineer – You write clean, well-structured Python, are comfortable with modern deep learning frameworks (JAX / PyTorch or similar), and take pride in building training and evaluation code that other people can rely on.
End-to-end owner – You like owning messy, open-ended modeling problems: defining the question, selecting data, designing experiments, tuning hyperparameters, and turning the best ideas into robust systems that can run at scale.
Curious, rigorous collaborator – You enjoy reading papers, proposing new approaches, and debating trade-offs with other strong researchers. You can clearly explain modeling choices and results to technical peers and non-ML stakeholders.
Excited about forecasting & decision-making – You’re motivated by building models that actually help people reason better about the future, not just chasing benchmark scores. You’d be happy to spend the next several years pushing the frontier of general-purpose forecasting models.
Must-Have Requirements
Education-first bar: PhD in ML/CS/Stats (or adjacent) from a top university OR equivalent research pedigree from OpenAI/Anthropic/DeepMind/FAIR/Mistral/NVIDIA Research/MSR/Google Brain or Jane Street/Citadel/HRT/Two Sigma/Renaissance.
Hands-on experience training models end-to-end (data pipelines, training loops, evaluation). Not just using hosted APIs.
Strong ML fundamentals and experimental rigor (can design and interpret research-grade experiments).
Expert PyTorch or JAX experience; comfortable implementing custom architectures and training code.
Able to work full-time, fully in-person in NYC (relocation OK).
Screening Questions
1. (Optional Video). This step is completely optional. If you’d like, record a short 2–3 minute video introducing yourself and your experience — or share a recording of your interview with the recruiter if that’s easier. You can upload the link via Loom or Google Drive. This just helps us get to know you better, but there’s no pressure if you’d prefer to skip it.
2. (Optional Portfolio / GitHub / Google Scholar) If available, please share a link to your GitHub, portfolio, Google Scholar/arXiv page, or any recent projects or papers you’ve worked on. This is entirely optional but helps provide more context about your work and research interests.
3. Tell us about an ML project where you trained or significantly improved a model beyond using an off-the-shelf API. What was the problem, how did you design the model and training pipeline, and what measurable impact did your work have (e.g., accuracy, robustness, speed, or business / user impact)?
About the Company
Company Overview
Company Size: 1–10 Employees (team of ~5 today)
Industry: Quantitative research, AI & machine learning, systematic trading
Zoa Research is building powerful, cross-domain quantitative forecasting models, starting from a trading lab that’s already profitable. Historically, forecasting models have been narrow and hand-crafted – brilliant quants spend years tuning architectures inside a single domain. Zoa’s bet is that scale and better priors beat hand-tuned niche models: they train general event-forecasting engines that learn from data across markets and contexts, and use inference-time compute and multi-agent optimization loops to continually improve their predictions.
The long-term vision is to become the default forecasting engine for the real economy – from supply-chain and energy to climate and catastrophic-risk modeling – by dramatically improving how institutions reason about uncertainty and choose experiments. If “science is the taming of chance,” Zoa wants to be the infrastructure layer that helps labs, investors, and operators tame chance at scale.
Company Culture
Zoa is a small, research-heavy team in New York, founded by Greg Volynsky (Harvard Law) and Sam Damashek (ex-Jane Street options desk, CMU CS). The culture blends the rigor of a top quant trading firm with the curiosity of an academic lab: people write proofs and experiments, but they also ship models that make and lose real money every day.
Day-to-day, you’ll work closely with the founders and a handful of researchers on deep modeling problems, not wrapper work: designing inductive biases, building multi-agent optimization loops around models, and stress-testing them against real-world policies. The team values:
Truth-seeking over politics – clean experiments, careful reasoning, and honest error bars.
Ownership – every researcher is responsible for ideas and production impact, not just papers.
High bar, low ego – people are intense about the work but relaxed and kind with each other.
It’s the kind of place where you can spend years going very deep on forecasting, work directly with founders who’ve shipped high-stakes systems before, and still have a meaningful share in the upside if Zoa becomes the forecasting engine the rest of the world builds on.
Benefits
Retirement/401k
Health Insurance
Vision Insurance
Dental Insurance
US citizenship preferred; TN / E-3 visa sponsorship available.
Early, meaningful equity at founding-team level (0.10%–1.00%).
Fully in-person research environment with tight feedback loops and direct founder access.
Opportunity to work on problems spanning trading, forecasting, and scientific discovery.
Relocation & Sponsorship
Relocation Assistance
Visa Sponsorship
What you can expect
Day to Day
You’ll spend most of your time doing end-to-end machine-learning work on hard forecasting problems. A typical week blends: designing new model architectures and training runs in Python/JAX/PyTorch, cleaning and curating cross-domain datasets, and building evaluation pipelines to understand what’s actually improving forecast quality. Some days you’ll be reproducing or extending recent research results; other days you’ll be shipping small productized models or tools that other researchers at Zoa can use. You’ll own experiments from idea → implementation → analysis, and you’ll write clean, production-oriented code so that promising ideas can graduate into the core forecasting engine.
Team
You’ll report directly to the founders / research leads and work closely with a small, high-caliber team of ML engineers and quantitative researchers. The team is flat and highly collaborative – everyone contributes to ideas, code, and experiments. You’ll often pair with a founder to scope projects, choose modeling approaches, and set success criteria. Because the company is still very early, you’ll have a real seat at the table on decisions about research direction, infrastructure, and how Zoa’s forecasting models are exposed to customers.
Growth
This is a founding-level ML / research role. In the near term you’ll help define Zoa’s core modeling stack and become the go-to owner for one or more parts of the forecasting system (data pipelines, training infrastructure, evaluation, or specific model families). Over the next few years you’ll have the opportunity to:
Shape Zoa’s research roadmap for cross-domain forecasting and model evaluation.
Raise the technical bar for future ML hires and mentor more junior researchers and engineers.
Grow into a Staff / Principal Research Engineer or early research-team lead as the company scales.
Interview Process
Step 1 – Short survey. After reviewing resumes, Zoa sends a short online questionnaire (≈15 minutes) to candidates who meet the basic requirements.
Step 2 – Initial call. A 30–45 minute video call over Google Meet to learn more about your background, interests, and what you’re looking for, and to share more about Zoa, the role, and how the team works.
Step 3 – Take-home coding & ML exercise. A practical coding challenge (target ≤ 8 hours) focused on Python, problem-solving, and ML fundamentals relevant to Zoa’s work.
Step 4 – Onsite / virtual technical interviews. A series of deeper technical interviews with the founders and engineering team. Candidates in or near New York are invited to the NYC office; others join via Google Meet. These sessions cover ML system design, research thinking, and production engineering.
Step 5 – Offer & closing. For candidates who pass the loop, Zoa discusses compensation, equity, and start date, and moves quickly to a formal offer.
Companies to Source From
These companies are similar to our client. Candidates with experience at these companies are seen as a big plus.
OpenAI – openai.com
Anthropic – anthropic.com
DeepMind – deepmind.com
Meta FAIR – ai.facebook.com
Google Brain – ai.google
Microsoft Research – microsoft.com
Mistral AI – mistral.ai
NVIDIA Research – nvidia.com
xAI – x.ai
Allen Institute for AI – allenai.org
Jane Street – janestreet.com
Citadel – citadel.com
Hudson River Trading – hudsonrivertrading.com
Two Sigma – twosigma.com
Renaissance Technologies – rentec.com
SIG – sig.com
Jump Trading – jumptrading.com
Redwood Research – redwoodresearch.org
Conjecture – conjecture.dev
Numenta – numenta.com