Elon Musk and a group of artificial intelligence experts and industry executives are calling for a six-month pause in training systems more powerful than OpenAI’s newly launched model GPT-4, they said in an open letter, citing potential risks to society and humanity.

The letter, issued by the non-profit Future of Life Institute and signed by more than 1,000 people including Musk, Stability AI CEO Emad Mostaque, researchers at Alphabet-owned DeepMind, as well as AI heavyweights Yoshua Bengio and Stuart Russell, called for a pause on advanced AI development until shared safety protocols for such designs were developed, implemented and audited by independent experts.

“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” the letter said.

The letter also detailed potential risks to society and civilization from human-competitive AI systems in the form of economic and political disruptions, and called on developers to work with policymakers on governance and regulatory frameworks.

The letter comes as EU police force Europol on Monday joined a chorus of ethical and legal concerns over advanced AI like ChatGPT, warning about the potential misuse of the system in phishing attempts, disinformation and cybercrime.

Musk, whose carmaker Tesla is using AI for an autopilot system, has been vocal about his concerns about AI.

Since its release last year, Microsoft-backed OpenAI’s ChatGPT has prompted rivals to accelerate the development of similar large language models, and companies to integrate generative AI models into their products.

Sam Altman, chief executive at OpenAI, hasn’t signed the letter, a spokesperson at Future of Life told Reuters.

OpenAI didn’t immediately respond to a request for comment.

“The letter isn’t perfect, but the spirit is right: we need to slow down until we better understand the ramifications,” said Gary Marcus, an emeritus professor at New York University who signed the letter.

“They can cause serious harm; the big players are becoming increasingly secretive about what they are doing, which makes it hard for society to defend against whatever harms may materialize.”