SEOUL: Some of the world’s biggest tech companies pledged to work together to guard against the dangers of artificial intelligence as they wrapped up a two-day AI summit, also attended by multiple governments, in Seoul.

Sector leaders from South Korea’s Samsung Electronics to Google promised at the event, co-hosted with Britain, to “minimise risks” and develop new AI models responsibly, even as they push to move the cutting-edge field forward.

The fresh commitment, codified in a so-called Seoul AI Business Pledge on Wednesday plus a new round of safety commitments announced the previous day, builds on the consensus reached at the inaugural global AI safety summit at Bletchley Park in Britain last year.

Tuesday’s commitment saw companies including OpenAI and Google DeepMind promise to share how they assess the risks of their technology, including risks “deemed intolerable”, and how they will ensure such thresholds are not crossed.

But experts warned it was hard for regulators to understand and manage AI when the sector was developing so rapidly.

“I think that’s a really, really big problem,” said Markus Anderljung, head of policy at the Centre for the Governance of AI, a non-profit research body based in Oxford, Britain.

“Dealing with AI, I expect to be one of the biggest challenges that governments all across the world will have over the next couple of decades.”

“The world will need to have some kind of joint understanding of what are the risks from these sort of most advanced general models,” he said.

Michelle Donelan, UK Secretary of State for Science, Innovation and Technology, said in Seoul on Wednesday that “as the pace of AI development accelerates, we must match that speed... if we are to grip the risks.”

She said there would be more opportunities at the next AI summit in France to “push the boundaries” in terms of testing and evaluating new technology.

“Simultaneously, we must turn our attention to risk mitigation outside these models, ensuring that society as a whole becomes resilient to the risks posed by AI,” Donelan said.

The stratospheric success of ChatGPT soon after its 2022 release sparked a gold rush in generative AI, with tech firms around the world pouring billions of dollars into developing their own models.

Such AI models can generate text, photos, audio and even video from simple prompts, and their proponents have heralded them as breakthroughs that will improve lives and businesses around the world.

However, critics, rights activists and governments have warned that they can be misused in a wide variety of ways, including the manipulation of voters through fake news stories or “deepfake” pictures and videos of politicians.

Many have called for international standards to govern the development and use of AI.

“I think there’s increased realisation that we need global cooperation to really think about the issues and harms of artificial intelligence. AI doesn’t know borders,” said Rumman Chowdhury, an AI ethics expert who leads Humane Intelligence, an independent non-profit that evaluates and assesses AI models.

Chowdhury told AFP that it is not just the “runaway AI” of science fiction nightmares that is a huge concern, but issues such as rampant inequality in the sector.
