Amateur Leverages ChatGPT on Erdős Primitive Set Conjecture
A non-expert's ChatGPT prompt yields a novel proof of a lower bound for the Erdős primitive-set sum, bypassing the human cognitive block noted by Tao and complementing Lichtman's 2022 upper-bound result.
Lede: Liam Price, a 23-year-old with no advanced mathematics training, used a single prompt to GPT-5.4 Pro to resolve a 60-year-old Erdős conjecture on the lower bound of sums over primitive sets.
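For context, the objects in play, stated here as standard definitions rather than as the article's own formulation:

```latex
% A set A of integers greater than 1 is primitive if no element divides another.
% Erdős (1935) showed the associated sum is uniformly bounded over all primitive sets A:
\[
  f(A) \;=\; \sum_{a \in A} \frac{1}{a \log a} \;<\; C
  \quad \text{for an absolute constant } C .
\]
% Lichtman (2022) proved the primes P maximize this sum:
\[
  f(A) \;\le\; f(P) \;=\; \sum_{p \text{ prime}} \frac{1}{p \log p} \;\approx\; 1.6366 .
\]
```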
Terence Tao told Scientific American that prior human attempts on the problem took an early collective wrong turn, creating a mental block around what turned out to be a more tractable question than assumed (ScientificAmerican.com, April 24 2024). Jared Lichtman proved the complementary upper bound near 1.6 for the Erdős sum on primes in his 2022 doctoral work at Oxford, leaving the limit-1 case open despite targeted efforts (Lichtman, arXiv:2111.04448, 2022). Price posted the LLM-generated proof to erdosproblems.com after informal verification with University of Cambridge undergraduate Kevin Barreto.
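As a minimal numeric illustration of where the "near 1.6" constant comes from (a sketch written for this summary, not part of Price's proof or Lichtman's paper; the cutoffs and the simple sieve are choices made here):

```python
from math import log

def primes_up_to(n):
    """Simple sieve of Eratosthenes; adequate for illustrative cutoffs."""
    sieve = [True] * (n + 1)
    sieve[:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [i for i, is_prime in enumerate(sieve) if is_prime]

def partial_erdos_sum(n):
    """Partial Erdős sum over primes, sum_{p <= n} 1/(p log p).

    The tail beyond n is roughly 1/log(n), so modest cutoffs
    visibly undershoot the limiting value of about 1.6366.
    """
    return sum(1 / (p * log(p)) for p in primes_up_to(n))

if __name__ == "__main__":
    for n in (10**4, 10**5, 10**6):
        print(f"N = {n:>9}: partial sum ~ {partial_erdos_sum(n):.4f}")
```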
The original Scientific American coverage underplayed both the prompt engineering and the precise source of the novelty: the proof synthesizes divisor-freeness properties of primitive sets with asymptotic density estimates in a combination number theorists had missed since Erdős posed the problem (ScientificAmerican.com, April 24 2024; Tao commentary referenced therein). Parallel events include DeepMind's FunSearch producing new cap-set constructions via LLM-driven evolutionary search in 2023 and AlphaProof achieving silver-medal-level performance at the 2024 IMO, showing a repeated capacity of LLM-based systems to surface non-obvious initial steps (Romera-Paredes et al., Nature 625, 2024; DeepMind.com, July 2024).
The pattern across these cases is that large language models systematically explore branches of the proof search space that human intuition discards after an early misstep, accelerating the resolution of open problems in additive combinatorics while also exposing that the catalogued Erdős problems have uneven thresholds of difficulty and originality (Tao, personal communication via Scientific American; erdosproblems.com archive).
AXIOM: LLMs let non-specialists inject fresh starting points into decades-old conjectures, routinely bypassing the exact early missteps that stalled domain experts and compressing discovery timelines across combinatorics.
Sources (3)
- [1] Amateur armed with ChatGPT solves an Erdős problem (https://www.scientificamerican.com/article/amateur-armed-with-chatgpt-vibe-maths-a-60-year-old-problem/)
- [2] On a conjecture of Erdős on primitive sets (https://arxiv.org/pdf/2111.04448)
- [3] FunSearch: Making new discoveries in mathematical sciences using Large Language Models (https://www.nature.com/articles/s41586-023-06924-6)