Nonprofit • Open Science

Advancing LLM Systems & AI for Science to lift human quality of life

S⁶ is a research salon and fellowship for top-tier engineers transitioning into scientific roles. We build methods, systems, and benchmarks that make scientific discovery faster, safer, and more reliable.

Mission

We aim to raise the floor of human well‑being to a next‑century baseline by accelerating scientific capability with modern AI, especially large language models. We pursue open research, reproducible artifacts, and practical systems that compound across the scientific stack.

  • Open science by default with responsible short embargoes for sponsored work.
  • Reproducible code, datasets, and benchmarks with artifact evaluation.
  • Training the next wave of engineer‑scientists through fellowships and mentorship.

Focus Areas

LLM Systems

Evaluation, drift detection, unlearning, data pipelines, and scalable serving.

AI for Science

Agentic workflows, lab‑grade reasoning, and domain‑grounded modeling.

Benchmarks & Datasets

Open, fair, and durable tests that reflect real scientific tasks.

Safety & Governance

Transparent reporting, bias/robustness audits, and reproducibility standards.

Current Research (Openings)

Our projects explore the frontier between large language models, scientific reasoning, and new forms of machine understanding. Fellows and collaborators contribute to open, high-impact efforts spanning systems, modeling, and evaluation.

Figure Understanding for AI Agents

Integrating diagram and figure comprehension into LLM-based research agents. The goal is to enable retrieval and reasoning over scientific figures—so agents can interpret, reference, and cite visual evidence as naturally as text.

LLM Systems · Image Understanding · RAG
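For readers who want a concrete picture of what this project involves, here is a minimal sketch of the retrieval step such an agent might use: figures and queries are embedded into a shared space and ranked by similarity. The encoder below is a deterministic stand‑in, not a real model; in practice a CLIP‑style multimodal encoder would fill that role, and the retrieved figures (or their captions and crops) would then be passed to a vision‑language model alongside the surrounding text.

```python
# Illustrative sketch only (not S6 code): rank scientific figures against a
# text query by cosine similarity in a shared embedding space. The "encoder"
# is a deterministic placeholder; a real system would use a multimodal model.
import hashlib
import numpy as np

def stand_in_embedding(key: str, dim: int = 64) -> np.ndarray:
    """Deterministic pseudo-random unit vector, standing in for a real encoder."""
    seed = int.from_bytes(hashlib.sha256(key.encode()).digest()[:8], "big")
    v = np.random.default_rng(seed).standard_normal(dim)
    return v / np.linalg.norm(v)

def retrieve_figures(query: str, figure_ids: list[str], k: int = 3) -> list[str]:
    """Return the k figures whose embeddings are most similar to the query."""
    q = stand_in_embedding("text:" + query)
    scored = sorted(
        ((float(q @ stand_in_embedding("figure:" + f)), f) for f in figure_ids),
        reverse=True,
    )
    return [f for _, f in scored[:k]]

if __name__ == "__main__":
    figures = ["fig1_ablation.png", "fig2_scaling.png", "fig3_pipeline.png"]
    print(retrieve_figures("scaling behavior of the model", figures, k=2))
```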

Diffusion-Transformer DNA Models

Developing compact diffusion-transformer (DiT) architectures for biological sequences. We study whether small-scale DiTs (4–20M parameters) can rival current hybrid transformer-SSM models for DNA representation learning.

Diffusion Models · Biological Foundation Models
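As a rough sanity check on what the 4–20M range implies about model size, here is a back‑of‑the‑envelope parameter count for small transformer backbones. The widths and depths are assumed for illustration, not the actual architectures under study.

```python
# Back-of-the-envelope sketch (assumed configs, not the actual S6 models):
# approximate parameter counts for small transformer backbones, to show
# what "4-20M parameters" implies about width and depth.
def approx_transformer_params(d_model: int, n_layers: int, vocab: int = 8) -> int:
    """Rough count: ~4*d^2 for attention projections + ~8*d^2 for the MLP
    per layer, plus embeddings for a small nucleotide vocabulary."""
    per_layer = 12 * d_model ** 2
    return n_layers * per_layer + vocab * d_model

for d_model, n_layers in [(256, 6), (384, 8), (512, 6)]:
    total = approx_transformer_params(d_model, n_layers)
    print(f"d_model={d_model}, layers={n_layers}: ~{total / 1e6:.1f}M params")
```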

Unlearning as Discovery Benchmark

Studying unlearning as a probe for knowledge regeneration: we first remove a concept from a model, then test whether the model can rediscover it through reasoning. A position paper was presented at NeurIPS AI4Science 2025.

Evaluation · Knowledge Dynamics
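The probe itself is simple to state. A minimal sketch of the evaluation loop, with hypothetical helper functions standing in for a concrete unlearning method and concept graders, looks like this:

```python
# Illustrative sketch of the unlearning-as-discovery probe. The helpers
# (unlearn, knows_concept, can_rederive) are hypothetical placeholders for
# a concrete unlearning method and concept graders, not a published API.
from typing import Any, Callable

def discovery_probe(
    model: Any,
    concept: str,
    unlearn: Callable[[Any, str], Any],
    knows_concept: Callable[[Any, str], bool],
    can_rederive: Callable[[Any, str], bool],
) -> dict:
    """1) Check the model knows the concept. 2) Unlearn it. 3) Confirm the
    direct knowledge is gone. 4) Test whether reasoning alone recovers it."""
    assert knows_concept(model, concept), "model must know the concept first"
    scrubbed = unlearn(model, concept)
    return {
        "forgot": not knows_concept(scrubbed, concept),
        "rediscovered": can_rederive(scrubbed, concept),
    }
```

A model that both forgets the concept and then rediscovers it provides evidence that the knowledge can be regenerated from what remains.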

Propose Your Own Research

Have an idea that aligns with our mission? We welcome proposals from fellows and collaborators. If the topic is beyond our core expertise, we’ll do our best to connect you with someone who can help.

Programs

Designed for elite engineers ready to transition into scientific research. Expect a weekly paper/prototype cadence, strong mentorship, and public artifacts. Funding status: early stage; current cohorts may be unpaid while we build partnerships and a donor base.

Research Fellowship

Full‑time, 6–12 months. Co‑mentored projects targeting papers, benchmarks, and open‑source releases.

  • Weekly research sprints and demos
  • Publication and artifact‑review track
  • Placement support into labs/industry R&D
Apply

Part‑time Fellowship

12 months at 20+ hrs/week. Ideal for engineers exploring the research path while employed or transitioning.

  • Focused milestones and mentorship
  • Open research contributions
  • Flexible scheduling
Apply

Why Now

1) Quality of life compounds with science

Life today is incomparably better than in centuries past—thanks to technology rooted in basic science. Few of us would choose a world without modern medicine, climate control, or global communication. Our premise is simple: when science advances, daily life follows.

2) Capability, not just effort, sets the ceiling

No amount of willpower or wealth can overcome missing scientific capability. History is full of examples, from quests for longevity to cures that were simply beyond the science of their day. Our work acknowledges this: to raise the ceiling of what’s possible, we must push the frontier of methods and tools.

3) LLMs as force multipliers for science

Large language models can accelerate science by scaffolding reasoning, automating literature synthesis, orchestrating experiments, generating code and analyses, and stress‑testing hypotheses. We build systems and evaluations that make these capabilities reliable, safe, and truly helpful to working scientists.

About S⁶

S⁶ follows the spirit of early open AI research organizations: open by default, research‑first, safety‑conscious. We focus on LLM systems, AI for Science, and foundational infrastructure that improves the reliability and utility of modern models in scientific workflows.

Founder

Robert Yang (Stanford BS with Distinction, Terman Scholar; MS in Computer Science, AI focus) founded S⁶ after personally transitioning from engineering to research. His interests span data drift, AI for science, ML systems, foundation models, and unlearning. Previously, he worked as a Research Scientist at a biology foundation model startup (Rosa Bio) and built LLM serving and infrastructure at scale (ex‑AWS).

Apply & Contact

Interested in a fellowship or in partnering as a mentor, donor, or sponsor? Send a short note with your background and goals.


Direct Email

Prefer email? Reach Robert directly:

When funding solidifies, we may introduce paid cohorts. Until then, fellowships operate as mentorship and open research collaborations.