SEOUL: Some of the world’s biggest tech companies pledged to work together to guard against the dangers of artificial intelligence as they wrapped up a two-day AI summit, also attended by multiple governments, in Seoul.

Sector leaders from South Korea’s Samsung Electronics to Google promised at the event, co-hosted with Britain, to “minimise risks” and develop new AI models responsibly, even as they push to move the cutting-edge field forward.

The fresh commitments, codified in the so-called Seoul AI Business Pledge on Wednesday plus a new round of safety commitments announced the previous day, build on the consensus reached at the inaugural global AI safety summit at Bletchley Park in Britain last year.

Tuesday’s commitment saw companies including OpenAI and Google DeepMind promise to share how they assess the risks of their technology, including risks “deemed intolerable”, and how they will ensure such thresholds are not crossed.

But experts warned it was hard for regulators to understand and manage AI when the sector was developing so rapidly.

“I think that’s a really, really big problem,” said Markus Anderljung, head of policy at the Centre for the Governance of AI, a non-profit research body based in Oxford, Britain.

“Dealing with AI, I expect to be one of the biggest challenges that governments all across the world will have over the next couple of decades.”

“The world will need to have some kind of joint understanding of what are the risks from these sort of most advanced general models,” he said.

Michelle Donelan, UK Secretary of State for Science, Innovation and Technology, said in Seoul on Wednesday that “as the pace of AI development accelerates, we must match that speed... if we are to grip the risks.”

She said there would be more opportunities at the next AI summit in France to “push the boundaries” in terms of testing and evaluating new technology.

“Simultaneously, we must turn our attention to risk mitigation outside these models, ensuring that society as a whole becomes resilient to the risks posed by AI,” Donelan said.

The stratospheric success of ChatGPT soon after its 2022 release sparked a gold rush in generative AI, with tech firms around the world pouring billions of dollars into developing their own models.

Such AI models can generate text, photos, audio and even video from simple prompts, and their proponents have heralded them as breakthroughs that will improve lives and businesses around the world.

However, critics, rights activists and governments have warned that they can be misused in a wide variety of ways, including the manipulation of voters through fake news stories or “deepfake” pictures and videos of politicians.

Many have called for international standards to govern the development and use of AI.

“I think there’s increased realisation that we need global cooperation to really think about the issues and harms of artificial intelligence. AI doesn’t know borders,” said Rumman Chowdhury, an AI ethics expert who leads Humane Intelligence, an independent non-profit that evaluates and assesses AI models.

Chowdhury told AFP that it is not just the “runaway AI” of science fiction nightmares that is a huge concern, but issues such as rampant inequality in the sector.
