Finding agreements and structures to govern artificial intelligence globally won’t be easy. That doesn’t keep Nicolas Miailhe of The Future Society from trying — and asking all the right questions.
By Anders Gundersen, Research Assistant, HGHI
Artificial intelligence continues to sweep into our lives in new and surprising ways. While financial services, telecoms and high tech paved the way, other industries such as automotive, energy, retail and healthcare are not far behind in identifying and testing novel applications for AI. In 2017 alone, investments in AI start-ups more than doubled, led by the United States and China, followed by Canada, Japan, the European Union and others.
In our global, interconnected world, the rise of AI is a multi-industry, international phenomenon. How, then, do we govern such a complex driver of change in societies around the world? It’s a daunting question, but Nicolas Miailhe isn’t intimidated. To the contrary, he is leading a global effort to understand and navigate the crucial ethical questions and policy choices that emerge as AI expands.
In 2014, Miailhe founded The Future Society, and began working with government organizations, practitioners, scientists and others involved in the implementation of AI around the world. He joined us at the Harvard Global Health Institute in January to share insights and challenges from this work, as part of our ongoing seminar series on how AI is revolutionizing health care globally.
“I often compare what we need for AI with climate change,” Miailhe said. “Much like the Intergovernmental Panel on Climate Change, we need an Intergovernmental Panel on AI that many scientists around the world contribute to as we create a series of facts we can use to guide policy.”
Like other speakers in this series, Miailhe pointed out that the lack of a widely accepted definition of AI is an obstacle, and so is the fact that “science isn’t able yet to predict the impact of AI.”
To avoid the popular images elicited by the term artificial intelligence – images fueled by 100 years of sci-fi storytelling and visionaries painting pictures of a world run by machines – Miailhe and his colleagues place AI within four distinct fields: nanotechnology, biotechnology, information technology, and cognitive science, or NBIC.
Welcome to the NBIC techno-scientific revolution
In this framework, the rise of AI becomes the NBIC techno-scientific revolution, and we begin to understand that what AI is and what we think it is are drastically different. For example, when we hear AI we might think of some sort of machine modeled after the human brain that is capable of “deep learning.”
In reality, today's artificial intelligence systems – so-called "neural networks" – are computer programs that use complex statistical modeling to become more proficient, over time, at recognizing what they are being asked and where to look for answers. These networks are not "intelligent" in the way human beings are, despite being loosely modeled on the human brain and nervous system.
Where the technology currently stands, these capacities can be classified as artificial narrow or artificial weak intelligence. The functional definition of this, as Miailhe put it, is a “big data-driven, machine learning-centric, complex socio-technical algorithmic system powered by high performance scalable computing.”
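This "machine learning-centric" core can be made concrete with a minimal sketch (not from the talk): a single artificial neuron trained by gradient descent to reproduce the logical AND function. Everything here – the data, the learning rate, the function names – is illustrative, not any particular library's API, but it shows that what looks like "learning" is repeated statistical fitting.

```python
import math

def sigmoid(z):
    # Squash any number into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

# Training data: inputs and target outputs for logical AND.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w1, w2, b = 0.0, 0.0, 0.0  # weights and bias start at zero
lr = 0.5                   # learning rate (step size)

# Thousands of small statistical corrections -- no "thought" involved.
for _ in range(5000):
    for (x1, x2), target in data:
        pred = sigmoid(w1 * x1 + w2 * x2 + b)
        err = pred - target      # how far off the prediction was
        w1 -= lr * err * x1      # nudge each weight to reduce the error
        w2 -= lr * err * x2
        b -= lr * err

def predict(x1, x2):
    # Threshold the neuron's output at 0.5 to get a 0/1 answer.
    return round(sigmoid(w1 * x1 + w2 * x2 + b))

print([predict(x1, x2) for (x1, x2), _ in data])  # → [0, 0, 0, 1]
```

The same mechanism, scaled up to millions of weights and fed with big data on high-performance computing, is what powers the systems described above – which is why Miailhe's definition emphasizes data and computing power rather than anything resembling human reasoning.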
“We have to understand that none of this is happening in a vacuum; it is a manifestation of the wider digital revolution,” Miailhe continued. “Machine learning has been around since the 1950s, but it is the new abundance of big data and high-performance cloud computing power that is triggering the current revolution.”
Power isn’t intelligence, however, and that is why most people in AI research continue to seek what is called artificial general – or strong – intelligence (AGI).
AGI, if achieved, would be closer to human intelligence in that it could perform intellectual tasks as well as an average human. But since at this point we don’t even know what exactly human intelligence is, it remains an elusive goal. (Note also that AGI does not include consciousness, nor the intelligence needed to DO things with a human body, which is where robotics comes into the picture.)
Given our lack of a clear understanding of intelligence, the continued study of the human brain is critical if we ever want to achieve AGI, Miailhe said. At the same time, he explained, there is a clear divide between those who think we will create AGI as early as 2030 and those who think it may never happen.
Resolving the many divides that exist around AI is but one challenge, said Miailhe. Another is global strategic and economic competition. For example, China is implementing a strategy to overtake the US’s investment in AI by 2030. There are competing national interests, and then there are competing global corporations. “And a lot of it happens in a ‘winner takes all’ paradigm,” said Miailhe. “There is only one Facebook.”
“There is also a global asymmetry,” Miailhe explained. “Companies can implement internationally, but countries cannot.”
For healthcare specifically, Miailhe foresees a set of important struggles: “This new technology can impact every aspect of healthcare but questions about fairness and bias are everywhere.” For example, since data input determines the recommendation, it matters whose data was used to feed the learning algorithm. If patient data was from a predominantly white population, will the recommendations still hold for patients from other ethnicities?
Also, Miailhe wondered, how do we bridge the gap between slow-moving government regulation and the rapid innovation in the business world, including healthcare? And what will AI mean for vulnerable populations? Will it increase social disparities, and if so, how do we prevent it from doing so?
To push these questions forward, Miailhe and his colleagues at The Future Society are facilitating discussions on an impressively global scale (to learn more, take a look at www.aicivicdebate.org).
“The IPCC emerged because of the potential for cataclysmic consequences if we didn’t start to address climate change,” Miailhe said. “Similarly, I think governance around AI will emerge because we need to acknowledge that it will be profoundly disruptive.”
This post was first published on the HGHI website on Feb. 3, 2019