Key Takeaways
- Periodic Labs aims to automate scientific discovery, particularly in physics and chemistry, using AI.
- Current AI models, trained primarily on digital data, lack the real-world experimentation necessary for scientific breakthroughs.
- Periodic Labs integrates physical lab results as reward functions for AI agents, moving beyond theoretical models.
- Scaling laws that worked for language models are not, on their own, sufficient for cracking hard problems in physics and chemistry.
- Interdisciplinary collaboration between AI and physical scientists is crucial for advancing hard sciences.
Deep Dive
- Liam Fedus and Ekin Dogus Cubuk, co-founders of Periodic Labs, combine backgrounds from OpenAI and Google DeepMind, respectively, to focus on scientific discovery.
- Periodic Labs functions as a frontier AI research lab, utilizing Large Language Models (LLMs), simulations, and real-world experiments.
- The company aims to create physically grounded reward functions for AI optimization by integrating experimentation into the AI learning loop (see the reward-loop sketch after this list).
- AI agents at Periodic Labs will use quantum mechanics and solid-state physics tools to perform experiments in a physical lab.
- The initial focus is on probing the quantum mechanical energy scale through powder synthesis to discover new materials, such as superconductors and magnets.
- Early language models, including ChatGPT, were trained with supervised data and reinforcement learning from human feedback; because human preference is an imprecise reward signal, these models were limited in mathematical reasoning.
- Current AI models struggle with scientific discovery because they are primarily trained on digital data and lack an iterative experimental process.
- These models cannot collapse epistemic uncertainty or learn from noisy or negative experimental results, preventing deeper scientific understanding.
- True scientific advancement requires acting in the real world and iterating through experimentation, a capability missing in today's AI systems.
- The common AI concept of a 'lab' differs from Periodic Labs' physical laboratory, which explores physics at the quantum mechanical energy scale.
- Periodic Labs' initial targets are superconductivity and magnetism, chosen for their impactful sub-goals and fundamental scientific interest.
- The pursuit of high-temperature superconductivity is a 'North Star' with goals like autonomous synthesis and characterization, potentially exceeding the current ambient-pressure record of roughly 135 Kelvin.
- These domains are also technically advantageous: the target properties depend chiefly on electronic structure rather than microstructure, making them comparatively robust to simulation uncertainty.
- Achieving AI scientists involves completing the full scientific loop—experimentation, simulation, data analysis—in these specific domains.
- Measuring progress includes synthesizing high-temperature superconductors and directly measuring material properties like ductility and toughness.
- Scaling laws held in language models from GPT-1 through GPT-5 Pro: loss improves predictably with data and compute, and new capabilities emerge along the way.
- However, this scaling approach has limitations for cracking complex physics problems, requiring a different methodology.
- The 'Y-axis' matters: the performance metric that defines scientific discovery is not the same as loss on the general internet data distribution.
- General knowledge is insufficient for specialized scientific breakthroughs; AI must optimize against the specific data distribution of the problem.
- Scaling laws in vision models show power-law gains in-domain, but out-of-domain error falls so slowly that extrapolating the trend implies centuries of scaling to close the gap (see the power-law sketch after this list).
- Periodic Labs employs a team comprising roughly half Machine Learning (ML) scientists and half physical scientists.
- Collaboration is fostered through weekly teaching sessions, encouraging an open culture where all questions are welcomed.
- ML researchers learn the scientific domains, while physical scientists teach the LLMs to reason about physics and chemistry.
- This approach bridges the knowledge gap, mirroring the necessity of interdisciplinary expertise for complex discoveries like superconductors.
- No single human is likely to possess the combined expertise in physics, chemistry, and synthesis required for complex scientific breakthroughs.
- The company culture emphasizes open learning, with computer scientists translating scientific concepts into API-like structures.
- Individuals with cross-disciplinary experience serve as bridges between ML and physical science teams.
- Joining Periodic Labs does not require an advanced degree in physics or chemistry; the company emphasizes curiosity and pragmatism instead.
- The primary differentiator for researchers is a strong mission focus on accelerating science.
- Periodic Labs seeks candidates who are deeply curious, pragmatic, solution-oriented, world-class in their fields, and possess a strong sense of urgency.
- Periodic Labs focuses on deploying AI technology in mission-critical industries such as space, defense, and advanced manufacturing.
- The deployment strategy involves identifying specific customer bottlenecks and mapping Periodic's capabilities to solve well-defined problems.
- Customer needs include automating complex simulations for product development and integrating data and processes across design pipelines.
- The concept of 'mid-training' injects new, post-cutoff knowledge into AI models beyond their initial pre-training, improving performance in specific scientific domains (see the mid-training sketch after this list).
- Pre-training encodes knowledge directly into AI weights, offering deeper understanding than simple retrieval systems for complex materials.
- The university ecosystem is crucial for advancing physical sciences, developing simulation tooling, and novel synthesis methods.
- Periodic Labs is establishing an advisory board with experts in superconductivity, solid-state chemistry, and physics to align with long-term research.
- Notable advisors include Zhi-Xun Shen, Steven Kivelson, Mercouri Kanatzidis, Chris Wolverton, and Kostya Novoselov.
- Periodic Labs is launching a grant program to support academic research relevant to LLMs, agents, synthesis, materials discovery, and physics modeling.
- AI models can benefit from integrating geometric reasoning capabilities (e.g., equivariant graph neural networks and diffusion models) to complement their linguistic strengths (a toy equivariance check follows below).
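
A minimal sketch of the reward-loop idea referenced above: an agent proposes a synthesis recipe, a lab synthesizes it and measures the result, and the measurement becomes the reward. Every name here, and the toy simulated "lab," is an illustrative assumption, not Periodic Labs' actual stack; the point is the shape of the loop: propose, synthesize, measure, reward, update.

```python
"""Hypothetical lab-in-the-loop reward: propose -> synthesize -> measure -> reward."""
import random

def propose_recipe(history):
    """Stand-in for an LLM agent conditioning on past (recipe, reward) pairs;
    here, just a random anneal temperature in Celsius."""
    return {"anneal_temp_c": random.uniform(600.0, 1000.0)}

def synthesize_and_measure_tc(recipe):
    """Stand-in for the physical lab: powder synthesis plus a transition-
    temperature measurement. A toy response surface with measurement noise."""
    peak = 850.0  # pretend the best anneal temperature is 850 C
    tc = 90.0 - 0.05 * (recipe["anneal_temp_c"] - peak) ** 2
    return max(0.0, tc + random.gauss(0.0, 2.0))  # noisy Tc in kelvin

def reward(tc_kelvin, target_k=135.0):
    """Physically grounded reward: fractional progress toward a target Tc."""
    return max(0.0, min(1.0, tc_kelvin / target_k))

history = []
for step in range(20):
    recipe = propose_recipe(history)
    tc = synthesize_and_measure_tc(recipe)  # slow and noisy, but real-world
    history.append((recipe, reward(tc)))

best = max(history, key=lambda pair: pair[1])
print(f"best recipe so far: {best[0]}, reward {best[1]:.2f}")
```

Unlike a human-preference reward, this signal cannot be flattered: the agent only scores well if the material actually superconducts at a higher temperature.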
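The in-domain versus out-of-domain scaling argument, made concrete. The power-law form L(C) = a * C^(-b) is the standard scaling-law ansatz; the exponents below are invented for illustration, not measured values from the discussion.

```python
"""Why a slow out-of-domain power law is effectively a wall."""

def loss(compute, a, b):
    """Standard scaling-law ansatz: loss falls as a power law in compute."""
    return a * compute ** (-b)

def compute_needed(target_loss, a, b):
    # Invert L = a * C**(-b)  =>  C = (a / L)**(1 / b)
    return (a / target_loss) ** (1.0 / b)

a = 10.0
b_in, b_out = 0.30, 0.05  # out-of-domain error falls far more slowly (illustrative)
target = 1.0

print(f"in-domain compute to reach loss {target}:  {compute_needed(target, a, b_in):.1e}")
print(f"out-of-domain compute for the same loss: {compute_needed(target, a, b_out):.1e}")
# With these toy exponents: ~2e3 versus ~1e20 units of compute -- the
# 'centuries of scaling' intuition, in one division.
```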
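A minimal sketch of the mid-training pattern: resume next-token training on fresh, post-cutoff domain text at a reduced learning rate, so new knowledge is absorbed into the weights without washing out general capabilities. The tiny model and random "corpus" are placeholders; nothing here is Periodic Labs' disclosed recipe.

```python
"""Continued pre-training ('mid-training') on post-cutoff domain text."""
import torch
import torch.nn as nn

vocab, d_model = 256, 64
# Stand-in for a pre-trained LLM whose weights we continue training.
model = nn.Sequential(nn.Embedding(vocab, d_model), nn.Linear(d_model, vocab))

# Lower learning rate than pre-training, to limit catastrophic forgetting.
opt = torch.optim.AdamW(model.parameters(), lr=1e-5)
loss_fn = nn.CrossEntropyLoss()

def batches_from_domain_corpus(n_steps, seq_len=32):
    """Placeholder for tokenized post-cutoff text (papers, lab notes, ...)."""
    for _ in range(n_steps):
        yield torch.randint(0, vocab, (8, seq_len))

for tokens in batches_from_domain_corpus(n_steps=100):
    inputs, targets = tokens[:, :-1], tokens[:, 1:]  # next-token objective
    logits = model(inputs)
    loss = loss_fn(logits.reshape(-1, vocab), targets.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because the knowledge lands in the weights rather than a retrieval index, the model can reason with it directly, which is the contrast drawn above with simple retrieval systems.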
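Finally, the geometric-reasoning point as a toy check: an energy model that depends only on interatomic distances is invariant under rotation, which is exactly the symmetry that E(3)-equivariant graph networks build into their architecture rather than hoping to learn from data. The pairwise model below is illustrative, not a real network.

```python
"""Numerical check of rotation invariance for a distance-based energy model."""
import numpy as np

def toy_energy(positions):
    """Sum a simple pairwise term over all interatomic distances."""
    diffs = positions[:, None, :] - positions[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    iu = np.triu_indices(len(positions), k=1)  # each pair counted once
    r = dists[iu]
    return np.sum(r ** -12 - r ** -6)  # Lennard-Jones-like, illustrative

def random_rotation():
    q, _ = np.linalg.qr(np.random.randn(3, 3))
    return q * np.sign(np.linalg.det(q))  # proper rotation, det = +1

atoms = np.random.rand(5, 3) * 3.0 + 1.0  # five random atomic positions
R = random_rotation()
e1, e2 = toy_energy(atoms), toy_energy(atoms @ R.T)
assert np.isclose(e1, e2), "energy should be invariant under rotation"
print(f"E(x) = {e1:.4f}, E(Rx) = {e2:.4f}")
```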