Google has announced that it's expanding its Vulnerability Rewards Program (VRP) to compensate researchers for finding attack scenarios tailored to generative artificial intelligence (AI) systems in an effort to bolster AI safety and security.
Google宣布扩展其漏洞奖励计划(VRP),以赔偿研究人员发现定制生成人工智能(AI)系统的攻击场景,以增强AI安全性。
"Generative AI raises new and different concerns than traditional digital security, such as the potential for unfair bias, model manipulation or misinterpretations of data (hallucinations)," Google's Laurie Richardson and Royal Hansen said.
“生成式AI引发了与传统数字安全不同的新问题,如不公平偏见的潜力、模型操纵或数据误解(幻觉)等,”Google的Laurie Richardson和Royal Hansen表示。
Some of the categories that are in scope include prompt injections, leakage of sensitive data from training datasets, model manipulation, adversarial perturbation attacks that trigger misclassification, and model theft.
在范围内的一些类别包括提示注入、从训练数据集泄露敏感数据、模型操纵、触发误分类的对抗性扰动攻击以及模型盗窃。
It's worth noting that Google earlier this July instituted an AI Red Team to help address threats to AI systems as part of its Secure AI Framework (SAIF).
值得注意的是,今年7月早些时候,Google设立了一个AI红队,以帮助解决AI系统面临的威胁,作为其安全AI框架(SAIF)的一部分。
Also announced as part of its commitment to secure AI are efforts to strengthen the AI supply chain via existing open-source security initiatives such as Supply Chain Levels for Software Artifacts (SLSA) and Sigstore.
作为确保AI安全承诺的一部分,还宣布了通过现有的开源安全倡议来加强AI供应链的努力,如软件工件的供应链级别(SLSA)和Sigstore。
"Digital signatures, such as those from Sigstore, which allow users to verify that the software wasn't tampered with or replaced," Google said.
“数字签名,例如来自Sigstore的签名,允许用户验证软件未被篡改或替换,”Google表示。
"Metadata such as SLSA provenance that tell us what's in software and how it was built, allowing consumers to ensure license compatibility, identify known vulnerabilities, and detect more advanced threats."
“元数据,如SLSA来源,告诉我们软件的内容以及它是如何构建的,使消费者能够确保许可证兼容性,识别已知的漏洞,并检测更高级的威胁。”
The development comes as OpenAI unveiled a new internal Preparedness team to "track, evaluate, forecast, and protect" against catastrophic risks to generative AI spanning cybersecurity, chemical, biological, radiological, and nuclear (CBRN) threats.
此举发生在OpenAI成立了一个新的内部应急团队,以“跟踪、评估、预测和保护”免受生成AI涵盖的灾难性风险的威胁,包括网络安全、化学、生物、放射性和核(CBRN)威胁。
The two companies, alongside Anthropic and Microsoft, have also announced the creation of a $10 million AI Safety Fund, focused on promoting research in the field of AI safety.
这两家公司,与Anthropic和微软一起,还宣布创建了一项专注于促进AI安全领域研究的1000万美元AI安全基金。
推荐站内搜索:最好用的开发软件、免费开源系统、渗透测试工具云盘下载、最新渗透测试资料、最新黑客工具下载……
还没有评论,来说两句吧...