Where will High-Level Machine Intelligence take us?

What frightens many is the anticipated impact of High-Level Machine Intelligence (HLMI) on humanity. Add to this the exponential growth happening in the AI space right now, with change so rapid that it's clear we're not in control. In fact, with many of the latest capabilities in LLMs (large language models), engineers have no idea how some outcomes actually come about. What's clear is that many use cases are already surpassing human capability.

In AI Impacts' The 2022 Expert Survey on Progress in AI, published in August 2022, we learned what the experts think. One of the most compelling statistics: 50% of AI researchers believe there is a 10% or greater chance that humans will go extinct because of our inability to control AI.

This is a horrifying thought. AI is accelerating at such a pace, and business is racing to insert it into every facet of our lives without guardrails or regulation, that we simply cannot control it. It's just moving too fast. Eventually, many believe, it will control itself. Here are other key findings from the survey:

The aggregate forecast time to a 50% chance of HLMI was 37 years, or 2059. Many feel that the exponential growth in generative AI capability we are seeing now will deliver HLMI far sooner. Even so, the forecast has already moved about eight years closer in the six years since 2016, when the aggregate prediction put a 50% probability at 2061, or 45 years out.

The median respondent believes the probability that the long-run effect of advanced AI on humanity will be “extremely bad (e.g., human extinction)” is 5%. Many respondents were substantially more concerned: 48% of respondents gave at least a 10% chance of an extremely bad outcome. But some were much less concerned: 25% put it at 0%.

The median respondent also believes machine intelligence will probably (60%) be “vastly better than humans at all professions” within 30 years of HLMI, and the rate of global technological improvement will probably (80%) dramatically increase (e.g., by a factor of ten) as a result of machine intelligence within 30 years of HLMI.

Undoubtedly, there is much good in AI, and now we're left asking, "Will the bad outweigh the good?"

Seriously, how do we even wrap our heads around all this?