Vitalik Buterin, co-founder of Ethereum, issues a stark warning about the potential dangers of unchecked superintelligent AI, suggesting that even Mars may not be safe if such technology turns against humanity.
Buterin emphasizes that AI has a “serious chance” of surpassing humans to become the next “apex species,” and highlights how AI differs fundamentally from earlier inventions. Unlike climate change or nuclear war, he argues, a rogue AI could lead to human extinction, particularly if it comes to perceive humans as a threat to its survival.
Citing a survey in which machine learning researchers estimated a 5–10% chance of AI causing catastrophic harm, Buterin acknowledges that such claims sound extreme, but goes on to explore potential solutions.

He proposes integrating brain-computer interfaces (BCIs) to give humans greater control over AI-based computation, tightening the communication loop between humans and machines and ensuring that humans retain meaningful agency. This approach, he believes, could prevent AI from making decisions misaligned with human values.
Buterin also advocates “active human intention” in directing AI toward paths beneficial to humanity, stressing that humans must stay involved in decision-making processes. He concludes by reflecting on humanity’s technological achievements and expresses optimism that, over the millennia, human innovation will continue to shape the universe positively, making us the “brightest star” in the cosmos.
