Key Takeaways
- Hosts set 2026 tech resolutions, including mastering short-form video and optimizing productivity systems.
- Past resolutions included AI for meditation (a 'flop') and increased social media engagement.
- Listeners questioned AI's role in scenarios like deepfaking Santa and chatbot liability for misinformation.
- The podcast clarified its focus on frontier AI models over widely adopted but less innovative ones.
- Discussions covered space data centers, AI hallucinations, and robot caregivers' impact on child development.
Deep Dive
- Casey Newton's 2025 resolution to use AI for meditation was deemed a 'major flop,' though he found meditation itself beneficial.
- Newton combated 2024 burnout by reconnecting with his purpose as a journalist after attending an AI conference.
- One host said attending events and writing a book deepened his engagement with work, aiding his recovery from burnout.
- The 2026 resolution for one host is to master short-form video content, acknowledging its dominance across digital platforms.
- One host committed to experimenting with high-value, authentic short-form video, despite concerns about adapting his journalistic earnestness to the format.
- The other host expressed concerns about the potential negative impact of short-form video on younger audiences.
- A host resolved for 2026 to avoid significant changes to their productivity system, which uses Capacities for journaling and 'blips' for idea tracking.
- In use for four months, the system has improved research for columns by resurfacing ideas through randomized spaced repetition.
- One host proposed an 'A/B test' for the year, comparing his unstructured approach against the other host's established system.
- The podcast name 'Hard Fork' originated in 2021 from the cryptocurrency term for splitting a blockchain; it was chosen after legal issues forced a change from the initial name.
- A listener asked for advice on using AI to deepfake Santa into home security footage for children.
- The hosts discussed the evolving parental approach to the Santa myth and potential alternatives to AI deepfakes.
- A listener expressed frustration with large companies announcing ambitious AI initiatives while basic technology like office Wi-Fi still struggles.
- Kevin Roose acknowledged that AI does not solve fundamental IT problems and that slow-moving companies may struggle to adopt new technologies.
- Despite current technological shortcomings, the hosts argued that AI will fundamentally change lives, warranting continued reporting on its advancements.
- A listener questioned if space-based data centers primarily aim to evade earthly jurisdiction, referencing the 1967 Outer Space Treaty.
- During a listener's genealogy research, Gemini hallucinated an analysis of Elon Musk's ancestry; the hosts dismissed it as a standard AI hallucination.
- The hosts entertained model distillation as a possible explanation but dismissed the theory of deliberate 'poisoning' by competitors.
- A new mother asked whether infants could develop genuine attachment to robot caregivers, such as a Neo assisting with childcare.
- The discussion referenced science fiction, exploring whether mechanical bonds can substitute for biological ones.
- The hosts debated whether technologies like the Snoo bassinet detract from bonding or offer practical benefits such as parental rest.
- Listeners inquired why the podcast doesn't cover models like Copilot, DeepSeek, and Grok.
- The hosts focus on 'frontier AI models' that introduce new capabilities, finding widely used but less innovative models less compelling.
- They plan to cover DeepSeek, representing China's advancements, and Grok, citing its competitive benchmarks and real-time X data access.
- In Moffitt v. Air Canada, a tribunal ruled Air Canada liable for a chatbot's misinformation about bereavement fares.
- A Chevrolet of Watsonville chatbot offered a new Tahoe for $1; a legal expert noted that such 'too good to be true' offers are likely not legally binding.
- The hosts discussed why AI passing the Turing test (GPT-4 judged human 54% of the time in May 2024; newer models reaching 73% in 2025) wasn't a cultural event, citing 'moving goalposts.'