
Phone scammers in these two countries are now using AI, with fake voices so convincing that many people have been duped!
Source: 融媒體采編平臺    Author: 世紀君    Date: 2023-04-04

據(jù)外媒報道,近來,美國和加拿大使用AI合成語音進(jìn)行電信詐騙的案例多發(fā),不少上當(dāng)?shù)亩际抢夏耆?。這一現(xiàn)象引起了業(yè)界和媒體的關(guān)注。

 

據(jù)美國國家公共電臺(NPR)3月22日報道,多年來,一個常見的電信騙局(scam)是,一個自稱是權(quán)威人士比如警察的人給你打電話,緊急要求你付錢幫助你的朋友或家人擺脫困境。

Screenshot from an NBC News video

 

 

For years, a common scam has involved getting a call from someone purporting to be an authority figure, like a police officer, urgently asking you to pay money to help get a friend or family member out of trouble.
 

Now, federal regulators warn, such a call could come from someone who sounds just like that friend or family member — but is actually a scammer using a clone of their voice.


美國聯(lián)邦貿(mào)易委員會(Federal Trade Commission)近期發(fā)布了一份消費(fèi)者警告,敦促人們警惕犯罪分子用來詐騙人們錢財?shù)淖钚录夹g(shù)——使用人工智能生成語音的詐騙電話。

The Federal Trade Commission recently issued a consumer alert urging people to be vigilant for calls using voice clones generated by artificial intelligence, one of the latest techniques used by criminals hoping to swindle people out of money.
 

NPR: An urgent call from a relative? The FTC warns it could be a thief using a voice clone

Screenshot of the NPR report

 

 

"All [the scammer] needs is a short audio clip of your family member's voice — which he could get from content posted online — and a voice-cloning program," the commission warned. "When the scammer calls you, he'll sound just like your loved one."
 

美國聯(lián)邦貿(mào)易委員會建議,如果有聽起來像朋友或親戚的人管你要錢,特別是如果他們想讓你通過電匯、加密貨幣或禮品卡支付,你應(yīng)該掛斷電話,直接打電話給那個人,以核實(shí)他們的說法。
 

The FTC suggests that if someone who sounds like a friend or relative asks for money — particularly if they want to be paid via a wire transfer, cryptocurrency or a gift card — you should hang up and call the person directly to verify their story.
 

Scammers are using AI voice clones to impersonate family members and swindle the elderly
 

The Washington Post described two real cases in a report on March 5.


The Washington Post: "They thought loved ones were calling for help. It was an AI scam."

 
Screenshot of The Washington Post report




The man calling Ruth Card sounded just like her grandson Brandon. So when he said he was in jail, with no wallet or cellphone, and needed cash for bail, Card scrambled to do whatever she could to help.


Card, 73, and her husband, Greg Grace, 75, dashed to their bank in Regina, Saskatchewan, and withdrew 3,000 Canadian dollars, the daily maximum. They hurried to a second branch for more money. But a bank manager pulled them into his office: Another patron had gotten a similar call and learned the eerily accurate voice had been faked, Card recalled the banker saying. The man on the phone probably wasn’t their grandson.


 

Benjamin Perkin’s elderly parents lost thousands of dollars to a voice scam. His parents received a phone call from an alleged lawyer, saying their son had killed a U.S. diplomat in a car accident. Perkin was in jail and needed money for legal fees.
 

 

The lawyer put Perkin, 39, on the phone, who said he loved them, appreciated them and needed the money. A few hours later, the lawyer called Perkin’s parents again, saying their son needed $21,000 in Canadian dollars before a court date later that day.
 

 

The voice sounded “close enough for my parents to truly believe they did speak with me,” he said. In their state of panic, they rushed to several banks to get cash and sent the lawyer the money through a bitcoin terminal.
 



The family has filed a police report with Canada’s federal authorities, Perkin said, but that hasn’t brought the cash back.


NBC also recently interviewed an American father who received a call that sounded like his daughter saying she had been kidnapped, after which the "kidnapper" took the phone and demanded a ransom. Fortunately, his wife stayed alert and immediately called their daughter to confirm she was safe, revealing the call to be a scam made with an AI-synthesized voice.

Screenshot from an NBC News video

 

A short audio clip posted on social media is all it takes to clone a voice
 

 

Powered by AI, a slew of cheap online tools can translate an audio file into a replica of a voice, allowing a swindler to make it “speak” whatever they type.
 

 

AI voice-generating software analyzes what makes a person’s voice unique — including age, gender and accent — and searches a vast database of voices to find similar ones and predict patterns, Farid said.
 

 

It can then re-create the pitch, timbre and individual sounds of a person’s voice to create an overall effect that is similar, he added. It requires a short sample of audio, taken from places such as YouTube, podcasts, commercials, TikTok, Instagram or Facebook videos, Farid said.
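As a rough illustration of the kind of acoustic analysis described above, the sketch below estimates one of those vocal traits, the fundamental frequency (pitch), from a synthetic signal using autocorrelation. This is a minimal toy example under assumed parameters (a 16 kHz sample rate, a 50–400 Hz vocal search range), not the method any particular cloning tool actually uses; real systems model far richer features than pitch alone.

```python
import numpy as np

def estimate_pitch(signal, sample_rate, fmin=50.0, fmax=400.0):
    """Estimate fundamental frequency (Hz) via autocorrelation,
    searching only lags that fall in a plausible vocal range."""
    sig = signal - signal.mean()
    # Autocorrelation for non-negative lags.
    corr = np.correlate(sig, sig, mode="full")[len(sig) - 1:]
    lag_min = int(sample_rate / fmax)   # shortest period considered
    lag_max = int(sample_rate / fmin)   # longest period considered
    best_lag = lag_min + int(np.argmax(corr[lag_min:lag_max]))
    return sample_rate / best_lag

# One second of a 120 Hz tone plus its second harmonic,
# loosely mimicking a voiced speech sound.
sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 120 * t) + 0.5 * np.sin(2 * np.pi * 240 * t)

print(f"estimated pitch: {estimate_pitch(tone, sr):.1f} Hz")
```

The autocorrelation peaks at a lag equal to the signal's period, so the strongest peak in the searched range recovers a pitch close to the 120 Hz input.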
 

 

“Two years ago, even a year ago, you needed a lot of audio to clone a person’s voice,” said Hany Farid, a professor of digital forensics at the University of California at Berkeley. “Now … if you have a Facebook page … or if you’ve recorded a TikTok and your voice is in there for 30 seconds, people can clone your voice.”

Screenshot from an NBC News video

 

 

Experts say federal regulators, law enforcement and the courts are ill-equipped to rein in the burgeoning scam. Most victims have few leads to identify the perpetrator and it’s difficult for the police to trace calls and funds from scammers operating across the world. And there’s little legal precedent for courts to hold the companies that make the tools accountable for their use.
 

 

In June 2022, the FTC recommended Congress pass laws so AI tools do not cause additional harm.
 

美國聯(lián)邦貿(mào)易委員會發(fā)言人朱莉安娜·格林瓦爾德說:“我們擔(dān)心深度造假(deepfake)和其他基于人工智能的合成媒體的風(fēng)險,這些媒體變得越來越容易創(chuàng)作和傳播,它們將被用于欺詐。”
 

"We're also concerned with the risk that deepfakes and other AI-based synthetic media, which are becoming easier to create and disseminate, will be used for fraud," FTC spokesperson Juliana Gruenwald told Insider.

 


 

"The FTC has already seen a staggering rise in fraud on social media," she said. "AI tools that generate authentic-seeming videos, photos, audio, and text could supercharge this trend, allowing fraudsters greater reach and speed," from imposter scams and identity theft to payment fraud and fake website creation. Chatbots could exacerbate these trends, Gruenwald said.
 

當(dāng)?shù)貢r間2019年1月24日,美國華盛頓,一名女子觀看一段視頻,這段視頻篡改了美國總統(tǒng)特朗普和前總統(tǒng)奧巴馬的言論,顯示了“深度偽造”技術(shù)是如何欺騙觀眾的。圖源:視覺中國

 

聯(lián)合國教科文組織:呼吁各國盡快實(shí)施人工智能倫理標(biāo)準(zhǔn)


據(jù)新華社報道,3月30日,聯(lián)合國教科文組織(UNESCO)總干事奧德蕾·阿祖萊(Audrey Azoulay)發(fā)表聲明,呼吁各國盡快實(shí)施該組織通過的《人工智能倫理問題建議書》(Recommendation on the Ethics of Artificial Intelligence),為人工智能發(fā)展設(shè)立倫理標(biāo)準(zhǔn)。
 

The United Nations Educational, Scientific and Cultural Organization (UNESCO) called on governments last Thursday to fully and immediately implement its recommendation on the ethics of artificial intelligence (AI).
 

聯(lián)合國教科文組織在2021年11月通過了《人工智能倫理問題建議書》,這是首份涉及人工智能倫理標(biāo)準(zhǔn)的全球性協(xié)議。該協(xié)議包含人工智能發(fā)展的規(guī)范以及相關(guān)應(yīng)用領(lǐng)域的政策建議,旨在最大程度發(fā)揮人工智能的優(yōu)勢并降低其風(fēng)險。

聯(lián)合國教科文組織官網(wǎng)截圖

 

The recommendation was endorsed by all UNESCO member states in November 2021. It is the first global framework for the ethical use of AI.
 

It guides countries in maximizing the benefits of AI and reducing the risks it entails. It contains values and principles, and detailed policy recommendations in all relevant areas.
 

 

"The world needs stronger ethical rules for artificial intelligence: this is the challenge of our time. UNESCO's recommendation on the ethics of AI sets the appropriate normative framework and provides all the necessary safeguards," UNESCO Director-General Audrey Azoulay said in a press release. "It is high time to implement the strategies and regulations at national level. We have to walk the talk and ensure we deliver on the Recommendation's objectives."
 

聯(lián)合國教科文組織表示,人工智能創(chuàng)新可能會引發(fā)倫理問題,尤其是歧視和刻板印象、性別不平等倫理問題。人工智能還可能對打擊虛假信息、隱私權(quán)、個人信息數(shù)據(jù)保護(hù)以及人權(quán)和環(huán)境等問題產(chǎn)生負(fù)面影響。
 

According to UNESCO, AI innovations may raise ethical issues, especially discrimination and stereotyping, including the issue of gender inequality. AI may also have a negative impact on the fight against disinformation, the right to privacy, the protection of personal data, and human and environmental rights.
 

根據(jù)聲明的說法,聯(lián)合國教科文組織的《建議書》將使用一種評估工具來指導(dǎo)各成員國。這一工具可以幫助各國確定勞動力所需的能力和技能,以確保對人工智能領(lǐng)域強(qiáng)有力的監(jiān)管。協(xié)議還規(guī)定,各國需要每四年提交一次報告,定期報告在人工智能領(lǐng)域的進(jìn)展和實(shí)踐。
 

UNESCO’s Recommendation places a Readiness Assessment tool at the core of its guidance to Member States. This tool enables countries to ascertain the competencies and skills required in the workforce to ensure robust regulation of the artificial intelligence sector. It also provides that the States report regularly on their progress and their practices in the field of artificial intelligence, in particular by submitting a periodic report every four years.
 

 

To this date, more than 40 countries in all regions of the world are already working with UNESCO to develop AI checks and balances at the national level, building on the Recommendation. UNESCO calls on all countries to join the movement it is leading to build an ethical AI. A progress report will be presented at the UNESCO Global Forum on the Ethics of Artificial Intelligence in Slovenia in December 2023.
 

Sources: The Washington Post, NPR, Business Insider, Xinhua, UNESCO

 





 
Hosted by China Daily. Copyright by 21st Century English Education Media. All rights reserved; reproduction or mirroring without written authorization is prohibited.