Features 09.01.2024
Artificial Intelligence: Time for Average Joe to Panic?
How worried should we be about artificial intelligence? Meta’s chief AI scientist weighs in
Artificial intelligence, a term that once conjured images of sci-fi fantasies, has become a tangible and rapidly evolving reality. Just look at how OpenAI’s ChatGPT, the fastest-growing consumer application to date, has demonstrated the incredible potential of AI. But alongside these advancements, a rising chorus of AI cynics warns of catastrophic scenarios, such as humanity losing control of runaway AI systems, including artificial general intelligence (AGI) and superintelligent systems.
“I think few people really believe in this kind of scenario or believe it’s a definite threat that cannot be stopped,” says Yann LeCun, the chief AI scientist at Meta and Silver Professor at the Courant Institute of Mathematical Sciences at New York University. “There are people who, I think, are much more reasonable – who think that there is real potential harm and danger that needs to be dealt with, and I agree. [But] AI is indeed part of the solution, and it’s important for people to recognise that the tech industry is actively working on correcting and mitigating the problems that have been identified.”
Indeed, most of the AI community is focused on creating systems that enhance human capabilities and solve real-world problems. Innovations like ChatGPT have already started showing AI’s immense potential in improving lives across various sectors. At the same time, AI has also made significant progress in addressing some of the issues social networks face, such as detecting and taking down hate speech more efficiently. The expectation is that AI will continue to offer more advanced solutions that could revolutionise everything from healthcare to education and beyond.
“These systems have a superficial understanding of reality”
“The AI landscape is continuously evolving, and the competitive nature of the field will likely lead to more advancements and a variety of AI products being developed and made available,” LeCun says. “As AI continues to advance, it will offer more solutions to the challenges faced by the tech industry and society. Recognising the potential of AI to be a positive force can help foster a more nuanced and balanced understanding of the technology and its impact.”
Despite the positive potential, those forecasting dire outcomes have significantly influenced the narrative around AI. These AI cynics have effectively tapped into the public’s imagination by stoking fears about scenarios where AI spirals out of human control. LeCun believes that this sensationalism not only distracts from the real benefits of AI, but also overshadows the actual risks that need attention.
“Because we as humans are language-oriented, I think the fact that these tools have been in the hands of people gives the public the impression that perhaps we are closer to human-level intelligence,” he says. “We think that when something is fluent, it’s also intelligent. But it’s not true. These systems have a superficial understanding of reality. This is one reason why they can essentially produce nonsense that sounds convincing but isn’t.”
Even though it will take more than just scaling up existing models to achieve true human-like intelligence in machines, this hasn’t stopped the panic. About a year ago, the Future of Life Institute called for a six-month pause in cutting-edge AI research. It sparked a significant debate at the time, with critics arguing that such a moratorium was impractical and, even if implemented, would hinder valuable innovations without addressing real issues. More than that, any attempt by governments to enforce such a pause would have negatively impacted competition and innovation in the AI field.
“I don’t see the point of regulating research and development”
“I’m all for regulating products that get into the hands of people, but I don’t see the point of regulating research and development,” LeCun says. “I don’t think that serves any purpose other than reducing the knowledge we could use to make technology better and safer… Stopping product development is a question of whether you want to regulate products that are made available if they endanger public safety. Obviously, that’s where the government should intervene. That’s what it does for drugs, aeroplanes, cars, and just about everything that consumers can access.”
While AI’s potential for harm in extreme scenarios is often highlighted, the current focus should be on addressing its more immediate challenges. That includes issues like bias, fairness, job displacement and the concentration of power, all of which are real concerns that the AI community is actively working to resolve. LeCun believes the emphasis should be on constructive solutions rather than fearmongering about speculative risks. Indeed, he cautions that we must be careful about regulating technology that people want and even need.
“There were knee-jerk reactions of similar types when the printing press started popping up,” he says. “The Catholic Church was extremely resistant to it because it said it would destroy society. It enabled people to read the Bible, created the Protestant movement, and had bad side effects like religious wars in Europe for a couple of hundred years. But it also enabled the dissemination of the Enlightenment, science, rationalism, and democracy, which resulted in the creation of the United States. So this had an overall good effect… AI is going to be an amplification of human intelligence. We might see a new renaissance because of it – a new enlightenment, if you will. And why would you want to stop that?”
Clearly, the narrative around AI needs a reset. That’s why it’s essential to communicate that AI, for all its potential pitfalls, is fundamentally a tool for empowerment. LeCun believes we must take a balanced view, recognising the technology’s incredible benefits while mitigating its present risks, and adopt an approach that allows for innovation while addressing any real harm that arises from deploying AI tools.
“We have the power to design AI systems with non-dominant, submissive objectives [that] obey rules aligned with humanity’s best interests”
“The key is to monitor and adapt as new technologies are developed and deployed,” he says. “Identifying real harm and taking corrective measures is an essential part of the process, just as it has been with previous technological advances, such as cars and aviation. Banning [or overly regulating] AI research would be an extreme and counterproductive measure. Instead, learning from history and implementing safety measures and regulations as needed will help ensure that AI development proceeds responsibly and that potential risks are minimised.”
In addition to maintaining a balanced view and promoting responsible AI development, LeCun emphasises that we must ensure that the benefits of the technology are accessible to everyone. Indeed, because open research plays a crucial role in fostering innovation and collaboration in the AI community, it will be interesting to see how the field continues to evolve in the coming years.
“The AI research community must continue exploring new techniques, models, and approaches to overcome these limitations and push the boundaries of what AI can do,” he says. “The democratisation of AI technology is likely to happen rapidly, with different models and capabilities becoming available on various platforms and devices. This competitive landscape will foster innovation and lead to more open solutions being developed, catering to different levels of users and research groups.”
Indeed, it seems that the democratisation of AI and the ongoing sharing of knowledge through open research will play a key role in driving advancements and shaping the future of a field that holds immense promise. And while it’s crucial to be vigilant about risks, it’s equally important not to let unfounded fears overshadow AI’s potential to benefit humanity profoundly. That means the path forward is one of informed optimism, where we embrace AI’s capabilities while diligently working to ensure the technology remains a force for good.
“Assuming that we’re smart enough to build systems to be super intelligent and not smart enough to design good objectives so that they behave properly is a strong assumption,” LeCun says. “We have the power to design AI systems with non-dominant, submissive objectives, [and] obey rules aligned with humanity’s best interests. These discussions and debates are essential in ensuring that AI development stays on a responsible and ethical path. By focusing on creating systems that benefit humanity and adhere to certain guidelines, we can minimise the potential risks and maximise the positive impact of AI.”