Three debates facing the AI industry: Intelligence, progress, and safety

The famous saying “the more we know, the more we don’t know” certainly rings true for AI.

The more we learn about AI, the less we seem to know for certain.

Experts and industry leaders often find themselves at bitter loggerheads about where AI is now and where it’s heading, failing to see eye to eye on seemingly elemental concepts like machine intelligence, consciousness, and safety.

Will machines one day surpass the intellect of their human creators? Is AI advancement accelerating towards a technological singularity, or are we on the cusp of an AI winter?

And perhaps most crucially, how can we ensure that the development of AI remains safe and beneficial when even the experts can’t agree on what the future holds?

We’re immersed in a fog of uncertainty. The best we can do is explore perspectives and come to our own informed yet fluid views in an industry constantly in flux.

Debate one: AI intelligence

With each new generation of generative AI models comes a renewed debate on machine intelligence.

Elon Musk recently fuelled debate on AI intelligence when he said, “AI will probably be smarter than any single human next year. By 2029, AI is probably smarter than all humans combined.”

Musk was immediately disputed by Meta’s chief AI scientist and eminent AI researcher, Yann LeCun, who said, “No. If it were the case, we would have AI systems that could teach themselves to drive a car in 20 hours of practice, like any 17 year-old. But we still don’t have fully autonomous, reliable self-driving, even though we (you) have millions of hours of *labeled* training data.”

This exchange hints at a much wider gulf in expert opinion. It’s a conversation that leads to a never-ending spiral of interpretation with no consensus, as demonstrated by the wildly contrasting views of technologists and AI leaders over the last year or so (compiled by Improve the News):

  • Geoffrey Hinton: “Digital intelligence” could overtake us within “5 to 20 years.”
  • Yann LeCun: Society is more likely to get “cat-level” or “dog-level” AI years before human-level AI.
  • Demis Hassabis: We may achieve “something like AGI or AGI-like in the next decade.”
  • Gary Marcus: “[W]e will eventually reach AGI… and quite possibly before the end of this century.”
  • Geoffrey Hinton: Current AI like GPT-4 “eclipses a person” in general knowledge and could soon do so in reasoning as well.
  • Geoffrey Hinton: AI is “very close to it now” and will be “much more intelligent than us in the future.”
  • Elon Musk: “We will have, for the first time, something that is smarter than the smartest human.”
  • Elon Musk: “I’d be surprised if we don’t have AGI by [2029].”
  • Sam Altman: “[W]e could get to real AGI in the next decade.”
  • Yoshua Bengio: “Superhuman AIs” will be achieved “between a few years and a couple of decades.”
  • Dario Amodei: “Human-level” AI could occur in “two or three years.”
  • Sam Altman: AI could surpass the “expert skill level” in most fields within a decade.
  • Gary Marcus: “I don’t [think] we are all that close to machines that are more intelligent than us.”

No party is unequivocally right or wrong in the debate of machine intelligence. It ultimately hinges on one’s subjective interpretation of intelligence and how AI systems measure up against that definition.

Pessimists may point to AI’s potential risks and unintended consequences, emphasizing the need for caution and stringent safety measures. They argue that as AI systems become more autonomous and powerful, they could develop goals and behaviors misaligned with human values, leading to catastrophic outcomes.

Conversely, optimists may focus on AI’s transformative potential, envisioning a future in which machines work alongside humans to solve complex problems and drive innovation. They may downplay the risks, arguing that concerns about superintelligent AI are largely hypothetical and that the technology’s benefits far outweigh the potential drawbacks.

The crux of the issue lies in the difficulty of defining and quantifying intelligence, especially when comparing entities as disparate as humans and machines.

For example, calculators demonstrate superior speed and accuracy in mathematical computations, outperforming humans in this narrow domain. A fly has advanced neural circuits and can successfully evade our attempts to swat or catch it.

In these narrow domains and potentially limitless others, humans are bested.

Pick your examples of intelligence, and everyone can be right or wrong.

Debate two: Is AI accelerating or slowing?

Is AI advancement set to accelerate, or to plateau and slow?

Some argue that we’re in the midst of an AI revolution, with breakthroughs happening faster than ever. Others contend that progress has hit a plateau, and the field faces momentous challenges that could slow innovation in the coming years.

Generative AI is the culmination of decades of research and billions in funding. When ChatGPT landed in 2022, the technology had already attained a high level in research environments, setting the bar high and throwing society in at the deep end.

The resulting hype also drummed up immense funding for AI startups, from Anthropic and Inflection to Stability AI and Midjourney.

This, combined with immense internal efforts from Silicon Valley veterans Meta, Google, Amazon, Nvidia, and Microsoft, resulted in a rapid proliferation of AI tools. GPT-3 quickly morphed into heavyweight GPT-4, while competing LLMs such as Anthropic’s Claude 3 Opus, xAI’s Grok, Mistral’s models, and Meta’s open-source models have also made their mark.

Some experts and technologists, such as Sam Altman, Geoffrey Hinton, Yoshua Bengio, Demis Hassabis, and Elon Musk, feel that AI acceleration has just begun.

Musk has likened generative AI to “waking the demon,” while Altman has suggested that thought-controlled computing is only a few years away, a prospect supported by recent advancements at Musk’s Neuralink, where one paralyzed man played a game of chess through thought alone.

On the other hand, experts such as Gary Marcus and Yann LeCun feel we’re hitting brick walls, with generative AI facing an introspective period or ‘winter.’

This would be exacerbated by practical obstacles, such as rising energy costs, the limitations of brute-force computing, regulation, and material shortages.

We’ve observed how AI is exceptionally expensive to build and run, and monetization isn’t straightforward, so tech companies need to sustain momentum to keep money flowing into the industry.

Debate three: AI safety

Conversations on AI intelligence and progress also have implications for AI safety. If we cannot agree on what constitutes intelligence or how to measure it, how can we ensure that AI systems are designed and deployed in a way that is safe and beneficial to society?

The absence of a shared understanding of intelligence makes it challenging to establish appropriate safety measures and ethical guidelines for AI development.

To underestimate AI intelligence is to underestimate the need for AI safety controls and regulation.

Conversely, overestimating or exaggerating AI’s abilities warps perceptions and risks over-regulation. This could silo power in Big Tech, which has proven clout in lobbying and out-maneuvering legislation.

Last year, protracted X debates among Yann LeCun, Geoffrey Hinton, Max Tegmark, Gary Marcus, Elon Musk, and numerous other prominent figures in the AI community highlighted deep divisions in AI safety. Big Tech has been hard at work self-regulating and creating ‘voluntary guidelines,’ with leaders actively advocating regulation.

Critics suggest that regulation enables Big Tech to reinforce market structures, rid themselves of disruptors, and set the terms of play to their liking.

On that side of the debate, experts like LeCun argue that the existential risks of AI have been overstated and are being used as a smokescreen by Big Tech companies to push for regulations that would stifle competition and consolidate their control over the industry.

LeCun and his supporters also point out that AI’s immediate risks, such as misinformation, deep fakes, and bias, are already harming people and require urgent attention.

On the other hand, Hinton, Bengio, Hassabis, and Musk have sounded the alarm about the potential existential risks of AI.

Bengio, LeCun, and Hinton, often known as the ‘godfathers of AI’ for their pioneering work on neural networks, deep learning, and other AI techniques throughout the 90s and early 2000s, remain influential today. Hinton and Bengio, whose views generally align, took part in a rare recent meeting between US and Chinese researchers at the International Dialogue on AI Safety in Beijing.

The meeting culminated in a statement: “In the depths of the Cold War, international scientific and governmental coordination helped avert thermonuclear catastrophe. Humanity again needs to coordinate to avert a catastrophe that could arise from unprecedented technology.”

It has to be said that Bengio, Hinton, and numerous others are highly unlikely to be disingenuous. They aren’t financially aligned with Big Tech and have no obvious reason to over-egg AI risks.

Hinton raised this point himself in an X spat with LeCun and ex-Google Brain co-founder Andrew Ng, highlighting that he left Google to speak freely about AI risks.

That doesn’t automatically add weight to his views, but it would be far-fetched to question the motives behind his warnings. Indeed, many great scientists have raised concerns about AI safety over the years, including the late Professor Stephen Hawking, who viewed the technology as an existential risk.

This swirling mix of polemic exchanges leaves little space for people to occupy the middle ground, fueling generative AI’s image as a polarizing technology.

AI regulation, meanwhile, has become a geopolitical issue, with the US and China tentatively collaborating over AI safety despite escalating tensions in other departments.

So, just as experts disagree about when and how AI will surpass human capabilities, they also differ in their assessments of the risks and challenges of developing safe and beneficial AI systems.

Debates surrounding AI intelligence aren’t just principled or philosophical in nature; they’re also a question of governance.

When experts vehemently disagree over even the basic elements of AI intelligence and safety, regulation can’t hope to serve people’s interests.

Creating consensus will require tough realizations from experts, AI developers, governments, and society at large.

However, in addition to many other challenges, steering AI into the future will require some tech leaders and experts to admit they were wrong. And that’s not going to be easy.

The post Three debates facing the AI industry: Intelligence, progress, and safety appeared first on DailyAI.