Founding ML Research Engineer (Quant Trading & Scientific Forecasting, YC Startup) w/ 0.10%–1.00% Equity
Bounty Amount: $15,000 - $25,000 ($25k cap)
Company Name: Zoa Research
Role Type: Full-Time
Location: New York City – fully in-person. Must be willing to relocate to NYC; Zoa can sponsor TN / E-3 visas.
Salary / Hourly Rate: $150,000 - $400,000 per year
Benefits: Early, meaningful equity at founding-team level (0.10%–1.00%); fully in-person research environment with tight feedback loops and direct founder access; opportunity to work on problems spanning trading, forecasting, and scientific discovery. US citizenship preferred; TN / E-3 visa sponsorship available.
Contract: 10% of Salary, 60-day guarantee
Job Details
Job Overview
Key Responsibilities:
Design & Train ML Models: Develop large-scale ML models focused on quantitative forecasting, particularly for trading, exploring new architectures and objectives.
Experiment Execution: Conduct end-to-end experiments, analyze results, and iterate; work closely with founders to materialize research ideas into production systems.
Build Research Infrastructure: Develop the toolchain for training, evaluating, and deploying forecasting models while actively contributing to Zoa’s research culture.
Responsibilities
This is a founding ML research role at Zoa. You’ll be part of a tiny team building the core quantitative models that power Zoa’s trading and long-term scientific forecasting engine. You’ll have substantial ownership over research direction, modeling choices, and how ideas turn into production systems.
What you’ll do
• Design and train large-scale ML models for quantitative forecasting, with an initial focus on trading.
• Explore new architectures, objectives, and data-processing pipelines to push model performance beyond existing domain-specific approaches.
• Run end-to-end experiments: define hypotheses, design and execute experiments, analyze results, and iterate quickly.
• Work closely with founders to translate vague, open-ended questions into concrete research directions and measurable outcomes.
• Help build the research and engineering toolchain for training, evaluation, and deployment of forecasting models.
• Contribute to research culture at Zoa: reading and discussing recent papers, sharing findings, and setting a high bar for rigor and reproducibility.
• Over time, help define Zoa’s roadmap for using forecasting models beyond trading, including applications in scientific discovery and other complex domains.
Qualifications
We’re looking for a senior, fundamentals-strong ML engineer who is as comfortable reading papers and inventing new approaches as they are writing clean, production-ready Python. The right person has hands-on experience training models (not just calling hosted APIs), understands modern deep learning frameworks like PyTorch inside and out, and can reason about architecture, data, and evaluation trade-offs. You should be able to drive projects independently from idea to shipped feature, collaborate well with a small, high-caliber team, and thrive in an early-stage startup environment where requirements are ambiguous, ownership is high, and the bar for code quality and rigor is serious.
Deep ML expertise – 4+ years of hands-on machine-learning experience (or equivalent research / thesis-based Master’s or PhD), with a track record of training and improving deep models, not just using pre-built APIs.
Strong Python engineering – you write clean, well-structured, production-ready Python without handholding, including tests, documentation, and thoughtful abstractions.
Modern deep learning frameworks – expert with PyTorch (preferred) or similar frameworks such as TensorFlow / JAX; comfortable implementing and modifying custom architectures, loss functions, and training loops (a short illustrative sketch follows this list).
End-to-end ownership – experience owning ML systems from data to deployment: building training pipelines, running experiments at scale, tuning hyperparameters, and shipping models into real products.
Applied problem-solving – proven ability to take messy, open-ended product requirements and turn them into concrete ML formulations, experiments, and shipped features.
Collaboration & communication – able to work closely with founders, engineers, and (eventually) customers; can explain trade-offs and model behavior clearly to both technical and non-technical partners.
Startup mindset – comfortable in a fast-moving, low-process environment; willing to wear multiple hats across research, engineering, and backend work when needed.
Nice to have – experience with geometry / graphics / CAD, 3D representations, or robotics; familiarity with cloud ML platforms (AWS / GCP) and backend frameworks (Flask, FastAPI, Django).
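To make the “custom architectures, loss functions, and training loops” bullet above concrete, here is a minimal, purely illustrative PyTorch sketch: a hand-written quantile (“pinball”) loss wired into a bare-bones training loop. The model, data, and quantile choice are hypothetical placeholders, not Zoa’s actual stack.

    # Purely illustrative sketch (not Zoa's code): a custom quantile ("pinball") loss
    # and a minimal PyTorch training loop, assuming a toy model and toy data.
    import torch
    import torch.nn as nn

    def pinball_loss(pred, target, q=0.9):
        # Asymmetric loss for forecasting a chosen quantile rather than the mean.
        err = target - pred
        return torch.mean(torch.maximum(q * err, (q - 1) * err))

    model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 1))
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)

    x = torch.randn(256, 32)   # hypothetical features
    y = torch.randn(256, 1)    # hypothetical targets

    for step in range(100):
        optimizer.zero_grad()
        loss = pinball_loss(model(x), y)   # e.g. forecast the 90th percentile
        loss.backward()
        optimizer.step()

The point is simply the level of the work: writing and modifying losses and loops directly rather than calling a hosted API.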
Ideal Candidate
Fundamentals-driven ML researcher – You have serious depth in machine learning and statistics, care about theoretical grounding, and enjoy thinking about why models work (or don’t) as much as making the charts go up.
Production-minded engineer – You write clean, well-structured Python, are comfortable with modern deep learning frameworks (JAX / PyTorch or similar), and take pride in building training and evaluation code that other people can rely on.
End-to-end owner – You like owning messy, open-ended modeling problems: defining the question, selecting data, designing experiments, tuning hyperparameters, and turning the best ideas into robust systems that can run at scale.
Curious, rigorous collaborator – You enjoy reading papers, proposing new approaches, and debating trade-offs with other strong researchers. You can clearly explain modeling choices and results to technical peers and non-ML stakeholders.
Excited about forecasting & decision-making – You’re motivated by building models that actually help people reason better about the future, not just chasing benchmark scores. You’d be happy to spend the next several years pushing the frontier of general-purpose forecasting models.
Must-Have Requirements
Solid ML research experience training modern deep models (industry or thesis-driven academic work).
Strong Python skills and experience with at least one deep learning framework (PyTorch, JAX, or TensorFlow).
Comfort working on noisy forecasting / time-series style problems and designing experiments around them.
Excited to work full-time, in-person from New York City on a small, high-ownership ML research team.
Screening Questions
1. (Optional Video). This step is completely optional. If you’d like, record a short 2–3 minute video introducing yourself and your experience — or share a recording of your interview with the recruiter if that’s easier. You can upload the link via Loom or Google Drive. This just helps us get to know you better, but there’s no pressure if you’d prefer to skip it.
2. (Optional Portfolio / GitHub / Google Scholar) If available, please share a link to your GitHub, portfolio, Google Scholar/arXiv page, or any recent projects or papers you’ve worked on. This is entirely optional but helps provide more context about your work and research interests.
3. Tell us about an ML project where you trained or significantly improved a model beyond using an off-the-shelf API. What was the problem, how did you design the model and training pipeline, and what measurable impact did your work have (e.g., accuracy, robustness, speed, or business / user impact)?
About the Company
Company Overview
Company Size: 1–10 Employees (team of ~5 today)
Industry: Quantitative research, AI & machine learning, systematic trading
Zoa Research is building powerful, cross-domain quantitative forecasting models, starting from a trading lab that’s already profitable. Historically, forecasting models have been narrow and hand-crafted – brilliant quants spend years tuning architectures inside a single domain. Zoa’s bet is that scale and better priors beat hand-tuned niche models: they train general event-forecasting engines that learn from data across markets and contexts, and use inference-time compute and multi-agent optimization loops to continually improve their predictions.
The long-term vision is to become the default forecasting engine for the real economy – from supply-chain and energy to climate and catastrophic-risk modeling – by dramatically improving how institutions reason about uncertainty and choose experiments. If “science is the taming of chance,” Zoa wants to be the infrastructure layer that helps labs, investors, and operators tame chance at scale.
Company Culture
Zoa is a small, research-heavy team in New York, founded by Greg Volynsky (Harvard Law) and Sam Damashek (ex-Jane Street options desk, CMU CS). The culture blends the rigor of a top quant trading firm with the curiosity of an academic lab: people write proofs and experiments, but they also ship models that make and lose real money every day.
Day-to-day, you’ll work closely with the founders and a handful of researchers on deep modeling problems, not wrapper work: designing inductive biases, building multi-agent optimization loops around models, and stress-testing them against real-world policies. The team values:
Truth-seeking over politics – clean experiments, careful reasoning, and honest error bars.
Ownership – every researcher is responsible for ideas and production impact, not just papers.
High bar, low ego – people are intense about the work but relaxed and kind with each other.
It’s the kind of place where you can spend years going very deep on forecasting, work directly with founders who’ve shipped high-stakes systems before, and still have a meaningful share in the upside if Zoa becomes the forecasting engine the rest of the world builds on.
Benefits
Retirement/401k
Health Insurance
Vision Insurance
Dental Insurance
US citizenship preferred; Zoa is willing to sponsor TN / E-3 visas.
Early, meaningful equity at founding-team level (0.10%–1.00%).
Fully in-person research environment with tight feedback loops and direct founder access.
Opportunity to work on problems spanning trading, forecasting, and scientific discovery.
Relocation & Sponsorship
Relocation Assistance
Visa Sponsorship
What you can expect
Day to Day
You’ll spend most of your time doing end-to-end machine-learning work on hard forecasting problems. A typical week blends: designing new model architectures and training runs in Python/JAX/PyTorch, cleaning and curating cross-domain datasets, and building evaluation pipelines to understand what’s actually improving forecast quality. Some days you’ll be reproducing or extending recent research results; other days you’ll be shipping small productized models or tools that other researchers at Zoa can use. You’ll own experiments from idea → implementation → analysis, and you’ll write clean, production-oriented code so that promising ideas can graduate into the core forecasting engine.
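As a rough illustration of the “evaluation pipeline” side of this work, here is a minimal rolling-origin backtest sketch in Python. The forecast_fn interface, the naive baseline, and the MAE metric are hypothetical examples, not Zoa’s actual tooling.

    # Purely illustrative sketch (not Zoa's tooling): a rolling-origin backtest that
    # scores a forecaster by mean absolute error at each step ahead. The forecast_fn
    # interface and the naive baseline below are hypothetical.
    import numpy as np

    def rolling_backtest(series, forecast_fn, window=200, horizon=5):
        # Walk forward through the series, forecasting `horizon` steps from each cutoff.
        errors = []
        for cutoff in range(window, len(series) - horizon):
            preds = np.asarray(forecast_fn(series[:cutoff], horizon))  # shape (horizon,)
            actual = series[cutoff:cutoff + horizon]
            errors.append(np.abs(preds - actual))
        return np.mean(errors, axis=0)  # MAE at 1, 2, ..., horizon steps ahead

    def naive(history, h):
        # "Repeat the last observed value" baseline.
        return np.repeat(history[-1], h)

    rng = np.random.default_rng(0)
    series = np.cumsum(rng.normal(size=1_000))  # toy random-walk series
    print(rolling_backtest(series, naive))

In practice the interesting part is what goes into forecast_fn and which metrics beyond MAE (calibration, pinball loss, economic value) you track; the sketch only shows the scaffolding.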
Team
You’ll report directly to the founders / research leads and work closely with a small, high-caliber team of ML engineers and quantitative researchers. The team is flat and highly collaborative – everyone contributes to ideas, code, and experiments. You’ll often pair with a founder to scope projects, choose modeling approaches, and set success criteria. Because the company is still very early, you’ll have a real seat at the table on decisions about research direction, infrastructure, and how Zoa’s forecasting models are exposed to customers.
Growth
This is a founding-level ML / research role. In the near term you’ll help define Zoa’s core modeling stack and become the go-to owner for one or more parts of the forecasting system (data pipelines, training infrastructure, evaluation, or specific model families). Over the next few years you’ll have the opportunity to:
Shape Zoa’s research roadmap for cross-domain forecasting and model evaluation.
Raise the technical bar for future ML hires and mentor more junior researchers and engineers.
Grow into a Staff / Principal Research Engineer or early research-team lead as the company scales.
Interview Process
Step 1 – Short survey. After reviewing resumes, Zoa sends a short online questionnaire (≈15 minutes) to candidates who meet the basic requirements.
Step 2 – Initial call. A 30–45 minute video call over Google Meet to learn more about your background, interests, and what you’re looking for, and to share more about Zoa, the role, and how the team works.
Step 3 – Take-home coding & ML exercise. A practical coding challenge (target ≤ 8 hours) focused on Python, problem-solving, and ML fundamentals relevant to Zoa’s work.
Step 4 – Onsite / virtual technical interviews. A series of deeper technical interviews with the founders and engineering team. Candidates in the New York area are invited to the NYC office; others join via Google Meet. These sessions cover ML system design, research thinking, and production engineering.
Step 5 – Offer & closing. For candidates who pass the loop, Zoa discusses compensation details, equity, and start date, and moves quickly to a formal offer.
Companies to Source From
These companies are similar to our client. Candidates with experience at these companies are seen as a big plus.
OpenAI – openai.com
Anthropic – anthropic.com
Cohere – cohere.com
Mistral – mistral.ai
Google – google.com
DeepMind – deepmind.com
NVIDIA – nvidia.com
Adept – adept.ai
Inflection – inflection.ai
Hugging Face – huggingface.co
Figma – figma.com
Lovable – lovable.ai
Cruise – cruise.com
Stability AI – stability.ai
Databricks – databricks.com
Two Sigma – two-sigma.com
Jane Street – janestreet.com
Hudson River Trading – hudsonrivertrading.com
Jump Trading – jumptrading.com
Citadel – citadel.com
SIG – sig.com
Redwood Research – redwoodresearch.org
Conjecture – conjecture.dev
Numenta – numenta.com
Open Philanthropy – openphilanthropy.org
Manifold – manifold.markets