EU police force Europol on Monday warned about the potential misuse of artificial intelligence-powered chatbot ChatGPT in phishing attempts, disinformation and cybercrime, adding to the chorus of concerns ranging from legal to ethical issues.
Since its release last year, Microsoft-backed OpenAI's ChatGPT has set off a tech craze, prompting rivals to launch similar products and companies to integrate it or similar technologies into their apps and products.
"As the capabilities of LLMs (large language models) such as ChatGPT are actively being improved, the potential exploitation of these types of AI systems by criminals provide a grim outlook," Europol said as it presented its first tech report starting with the chatbot.
In response to the growing public attention given to ChatGPT, the Europol Innovation Lab organised a number of workshops with subject matter experts from across the organisation to explore how criminals can abuse LLMs, as well as how the models may assist investigators in their daily work.
"ChatGPT's ability to draft highly realistic text makes it a useful tool for phishing purposes," Europol said. With its ability to reproduce language patterns to impersonate the style of speech of specific individuals or groups, the chatbot could be used by criminals to target victims, the EU enforcement agency said.
It said ChatGPT's ability to churn out authentic sounding text at speed and scale also makes it an ideal tool for propaganda and disinformation. "It allows users to generate and spread messages reflecting a specific narrative with relatively little effort."
Criminals with little technical knowledge could turn to ChatGPT to produce malicious code, Europol said.
Elon Musk and a group of artificial intelligence experts and industry executives are calling for a six-month pause in developing systems more powerful than OpenAI's newly launched GPT-4, in an open letter citing potential risks to society.
The open letter was released by the non-profit Future of Life Institute (FLI) and has so far gathered more than 1,000 signatures, including Musk's.
"Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable," said the letter issued by the Future of Life Institute.
The non-profit is primarily funded by the Musk Foundation, along with the London-based group Founders Pledge and the Silicon Valley Community Foundation, according to the European Union's transparency register.
"AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs," the letter said.
"Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones?" it asked.