Registration: AI Management Expert (CAIM) Course (Shanghai session)
Dates: June 22-23, focusing on the ISO/IEC 42001 artificial intelligence management system
Format: primarily in person, with simultaneous online attendance
Fee: CAIM 6,000 RMB; 10% off for early-bird registration, 20% off for group registration
Venue: SGS Shanghai Training Center, 5th Floor, Building 4, 889 Yishan Road, Xuhui District, Shanghai
Contact: 138 1664 6268, [email protected]
Please contact us to pre-register.
On June 5, 2024, a number of key current and former employees of OpenAI and Google DeepMind jointly published an open letter expressing concern about the potential risks of advanced artificial intelligence and the current lack of oversight of AI companies.
The letter was endorsed by leading figures in the field, including Geoffrey Hinton, a founding figure of deep learning; Turing Award winner Yoshua Bengio; and Stuart Russell, a leading scholar in AI safety.
Key points
1. Background of the open letter
The letter was jointly signed by current and former employees of frontier AI companies, including OpenAI and Google DeepMind. It expresses serious concern about the potential risks of advanced artificial intelligence (AI) and calls for stronger oversight.
2. Main risks
Entrenching existing inequality; manipulation and misinformation; loss of control over autonomous AI systems, potentially resulting in human extinction. AI companies, governments, and experts have all acknowledged these risks.
3. Current problems
AI companies have strong financial incentives to avoid effective oversight. Existing corporate governance structures are insufficient to address these problems. Companies hold substantial non-public information but are under no obligation to share it with governments or society.
4. Difficulties facing employees
Broad confidentiality agreements prevent employees from voicing concerns about company risks. Ordinary whistleblower protections are inadequate because many of these risks are not yet covered by regulation. Employees fear various forms of retaliation.
5. Principles called for
No retaliation: companies should not enter into or enforce agreements that prohibit "disparagement" or criticism. Anonymous reporting channels: companies should establish verifiably anonymous reporting processes. Support for open criticism: employees should be allowed to raise risk-related concerns publicly. Whistleblower protection: companies should not retaliate against employees who publicly share risk-related information.
A Right to Warn about Advanced Artificial Intelligence
1. The company will not enter into or enforce any agreement that prohibits "disparagement" or criticism of the company, nor retaliate for risk-related criticism by hindering any vested economic benefit.
2. The company will facilitate a verifiably anonymous process for current and former employees to raise risk-related concerns to the company's board, to regulators, and to an independent organization with relevant expertise.
3. The company will support a culture of open criticism and allow current and former employees to raise risk-related concerns about its technologies to the public, to the company's board, to regulators, or to an independent organization with relevant expertise, so long as trade secrets and other intellectual property interests are appropriately protected.
4. The company will not retaliate against current and former employees who publicly share risk-related confidential information after other processes have failed. We accept that any effort to report risk-related concerns should avoid releasing confidential information unnecessarily. Therefore, once an adequate process exists for anonymously raising concerns to the company's board, to regulators, and to an independent organization with relevant expertise, we accept that concerns should first be raised through such a process. However, as long as such a process does not exist, current and former employees should retain the freedom to report their concerns to the public.
References
OpenAI: “AGI would also come with serious risk of misuse, drastic accidents, and societal disruption … we are going to operate as if these risks are existential.”
Anthropic: “If we build an AI system that’s significantly more competent than human experts but it pursues goals that conflict with our best interests, the consequences could be dire … rapid AI progress would be very disruptive, changing employment, macroeconomics, and power structures … [we have already encountered] toxicity, bias, unreliability, dishonesty”
Google DeepMind: “it is plausible that future AI systems could conduct offensive cyber operations, deceive people through dialogue, manipulate people into carrying out harmful actions, develop weapons (e.g. biological, chemical), … due to failures of alignment, these AI models might take harmful actions even without anyone intending so.”
US government: “irresponsible use could exacerbate societal harms such as fraud, discrimination, bias, and disinformation; displace and disempower workers; stifle competition; and pose risks to national security.”
UK government: “[AI systems] could also further concentrate unaccountable power into the hands of a few, or be maliciously used to undermine societal trust, erode public safety, or threaten international security … [AI could be misused] to generate disinformation, conduct sophisticated cyberattacks or help develop chemical weapons.”
Bletchley Declaration (29 countries represented): “we are especially concerned by such risks in domains such as cybersecurity and biotechnology, … There is potential for serious, even catastrophic, harm”
Statement on AI Harms and Policy (FAccT) (over 250 signatories): “From the dangers of inaccurate or biased algorithms that deny life-saving healthcare to language models exacerbating manipulation and misinformation, …”
Encode Justice and the Future of Life Institute: “we find ourselves face-to-face with tangible, wide-reaching challenges from AI like algorithmic bias, disinformation, democratic erosion, and labor displacement. We simultaneously stand on the brink of even larger-scale risks from increasingly powerful systems”
Statement on AI Risk (CAIS) (over 1,000 signatories): “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
The full text of the open letter is available at: https://righttowarn.ai/