Technology has fundamentally shaped geopolitics and economics for a long time: Anne Neuberger
WASHINGTON: Anne Neuberger, Deputy National Security Advisor for Cyber and Emerging Technology at the National Security Council, said the United States is approaching Artificial Intelligence (AI) and the issues around it at the international level as well as domestically.
“We’re approaching this not only at the US level, but also the international stage,” said Neuberger while speaking to journalists at the Foreign Press Center on Friday.
“As you know, there’s an effort in the G7, there’s an effort under the Hiroshima process, to ensure that as a group of countries we’re setting international norms.”
The US, as a powerful country, is expected to make large AI companies behave responsibly.
The Biden-Harris Administration has already secured voluntary commitments from leading companies to manage the risks posed by AI.
But the question is: will those large companies behave responsibly when they operate in countries such as Pakistan, where governments are weak and understanding of the potential threats is limited?
“And then our goal is, both in the executive order that’s focused, as you know, on the US but also on the potential legislation that will guide the way the companies operate around the world. And that is our goal,” Neuberger said while responding to a query by Business Recorder.
“By setting the standard in law, we are also working with other countries to say these are what we believe the appropriate controls so that they can then be used by other countries to enforce as well, but also as a way for us to say how do we balance innovation and risk,” she added.
“And you saw when you were on – in the (Capitol) Hill yesterday how much folks on the Hill are thinking hard about these issues, bringing people in from civil society, from academia, and the countries involving others to really outline the way ahead that isn’t just for the US, but that sets the international norms, sets the – what we believe should be the norms for behaviour in this space as well.”
Why is AI important?
Neuberger said technology has long shaped foreign policy: countries that adapted to technological change powered their economies, attracted skilled labour, and drove productivity and economic growth.
Technology has fundamentally shaped geopolitics and economics for a long time, Neuberger said.
“And we can see the advancements in technology that are poised to define the geopolitical era of the future; for example, the combination of AI, advanced telecommunications, and sensors will generate breakthroughs in drug discovery, food security in an age of extreme weather, and clean energy in an era where we’re optimally fighting climate change.
“It will also enable novel military and intelligence capabilities that will shape our collective security. And this is a group that has covered technology and policy for a long time, so I know you see that arc both with its promise and peril.”
Neuberger said that in the US, they are carefully considering the national security implications of AI, including risks and opportunities, as well as tangible trust and safety mechanisms that could help achieve its promise, including the confidence of citizens in its use across the economy and society.
“And we want to achieve that promise together with key allies and partners, which is why you are here. Because international collaborations can ensure we all have equitable access to the promise of emerging technologies.
“Last year, between the U.S. and the European Union, we signed an administrative agreement focused on AI for public good, to drive both progress in AI and related privacy protecting technologies in five areas: one is health – there are 11 areas of partnership underneath, including building advanced models for more effective cancer detection, building advanced models for more effective cardiac treatments. There’s a second line of work around extreme weather prediction,” she said.
“Our international cooperation is focused on managing the risks and proving that AI can be done in a way that respects human rights and fundamental freedom, while providing that benefit. We believe we can generate the benefit of better cancer prediction models without also predicting individuals’ private health information. And that’s one of the goals as well,” she added.
Copyright Business Recorder, 2023