What Existential Risks?
Let us be more specific to help the industry grow.
Recently, there has been a lot of talk about AI posing existential risks. The discussion gained more credibility in the past week or so when prominent figures such as Yoshua Bengio, Demis Hassabis, and Sam Altman signed a statement together:
"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
I have a few things to say about this statement. I very much agree that we need to mitigate the risks of AI, but talking about the risk of extinction seems extreme here. In my opinion, AI does not pose an existential risk to humanity, unless these researchers and philosophers have seen something the rest of us have not. My definition of an existential risk, at least, would be something like a nuclear war that can decimate humanity very quickly.
So here’s the thing: if these researchers and philosophers have seen an existential risk that the general public and other researchers have not, then they should be specific about what that risk is, so that the whole of humanity can come together to mitigate it. That would tie in well with their stated intention, which is to warn humanity about AI. Otherwise, the statement rings hollow to me, and I have to view these researchers through a different lens. Claiming there is an existential risk without giving specifics is akin to claiming one has a first-mover advantage: what exactly is the advantage that comes with being the first mover? No one knows…and if no one knows, how can anyone go about protecting it?
This is, of course, just my own opinion, and I would love to hear yours, so do share it in the comments. :)
I like my subscribers to form their own opinions rather than take mine wholesale, so here are the articles I am referring to.
Here are the signatories and the statement made. <Centre For AI Safety>
Here is the article that inspired my thoughts on it. <The Conversation>
Frankly, that’s what I’ve been wondering as well. If there is evidence, bring it forward, so we can discuss and assess it. Otherwise, it becomes one of these “yet another alarm bell” headlines that one grows tired of after a while.