If you believe everything you read, you are probably quite worried about the prospect of a superintelligent, killer AI. The Guardian, a British newspaper, warned recently that “we’re like children playing with a bomb,” and a recent Newsweek headline reads, “Artificial Intelligence Is Coming, and It Could Wipe Us Out.”

Numerous such headlines, fueled by comments from the likes of Elon Musk and Stephen Hawking, are strongly influenced by the work of one man: professor Nick Bostrom, author of the philosophical treatise Superintelligence: Paths, Dangers, Strategies.

Bostrom is an Oxford philosopher, but quantitative risk assessment is the province of actuarial science. He may be dubbed the world’s first prominent “actuarial philosopher,” though the term seems an oxymoron: philosophy is an arena for conceptual arguments, while risk assessment is a data-driven statistical exercise.

So what do the data say? Bostrom aggregates the results of four different surveys of groups such as participants in a conference called “Philosophy and Theory of AI,” held in 2011 in Thessaloniki, Greece, and members of the Greek Association for Artificial Intelligence (he does not report response rates or the phrasing of questions, nor does he account for the surveys’ reliance on data collected in Greece).