Learning English from TED Talks: Your right to repair AI systems by Rumman Chowdhury

Your right to repair AI systems


Link: https://www.ted.com/talks/rumman_chowdhury_your_right_to_repair_ai_systems

Speaker: Rumman Chowdhury

Date: April 2024

Contents

  • Your right to repair AI systems
    • Introduction
    • Vocabulary
    • Summary
    • Transcript
    • Afterword

Introduction

For AI to achieve its full potential, non-experts need to contribute to its development, says Rumman Chowdhury, CEO and cofounder of Humane Intelligence. She shares how the right-to-repair movement of consumer electronics provides a promising model for a path forward, with ways for everyone to report issues, patch updates or even retrain AI technologies.

人类智能公司(Humane Intelligence)首席执行官兼联合创始人鲁曼·乔杜里(Rumman Chowdhury)说,要想充分发挥人工智能的潜力,非专业人士需要为其发展做出贡献。她分享了消费电子产品的"维修权"运动如何为未来的道路提供了一个很有前景的模式,让每个人都有办法报告问题、打补丁更新,甚至重新训练人工智能技术。

Vocabulary

crop yields:农作物产量

farmers would have to wait for weeks while their crops rot and pests took over. 农民们不得不等上数周,眼睁睁看着庄稼腐烂、害虫肆虐。

ride-share drivers 网约车司机

measles:美 ['mizlz] 麻疹(通常以复数形式出现)

mumps:美 [mʌmps] 腮腺炎(通常以复数形式出现)

other diseases like measles, mumps and the flu 麻疹、腮腺炎和流感等其他疾病

resoundingly:响亮地;(此处引申为)断然地,毫不含糊地

specs:规格

could they imagine a modern AI system that would be able to design the specs of a modern art museum? The answer, resoundingly, was no. “他们能想象出一个能够设计现代艺术博物馆规格的现代AI系统吗?答案斩钉截铁:不能。”

Now architects are liable if something goes wrong with their buildings. They could lose their license, they could be fined, they could even go to prison. 现在,如果建筑师的建筑出了问题,他们就要承担责任。他们可能会被吊销执照,被罚款,甚至会进监狱。

evacuation:美 [ɪˌvækjuˈeɪʃ(ə)n] 撤离;疏散;撤退

exit doors that open the wrong way, leading to people being crushed in an evacuation crisis 安全出口的门开启方向错误,导致人们在紧急疏散中被挤压踩踏

shatter:粉碎;破坏;破掉

the wind blows too hard and shatters windows. 风吹得太猛,吹碎了窗户。

agentic AI:代理式AI

tipping point:临界点;引爆点;爆发点;忍受极限

The next wave of artificial intelligence systems, called agentic AI, is a true tipping point between whether or not we retain human agency, or whether or not AI systems make our decisions for us. “下一波人工智能系统被称为代理式AI,它是一个真正的临界点:是我们保留人类自主权,还是让AI系统替我们做决定。”

medication:药物

a medical agent might determine whether or not your family needs doctor’s appointments, it might refill prescription medications, or in case of an emergency, send medical records to the hospital. 医疗代理可能会判断您的家人是否需要预约医生,可能会为您续配处方药,或者在紧急情况下将病历发送到医院。

What professional would trust an AI system with job decisions, unless you could retrain it the way you might a junior employee? “哪个专业人士会信任一个AI系统来做工作决策,除非你能够像培训一名初级员工那样对它进行再培训?”

A verb is elided after "might" in this sentence. Here's the explanation:

In "the way you might a junior employee," a verb is indeed omitted. This is a common device in English known as ellipsis. The full version would be:

"the way you might train a junior employee."

The verb "train" is dropped to avoid repetition, since the context already makes its meaning clear. The ellipsis keeps the sentence concise without hurting comprehension.

Another example:

"If you treat the project the way you would a major client, it will succeed."
(如果你像对待重要客户那样对待这个项目,它就会成功。)

The full sentence would be:
"If you treat the project the way you would treat a major client, it will succeed."

Likewise, the second "treat" is omitted because the context already makes the verb clear.

intrepid:美 [ɪnˈtrɛpəd] 勇敢的;无畏的

Or you could be like these intrepid farmers and learn to program and fine-tune your own systems 或者你可以像这些勇敢的农民一样学习编程和微调自己的系统

Summary

Rumman Chowdhury, CEO and cofounder of Humane Intelligence, begins her talk by highlighting the intersection of artificial intelligence (AI) and farming technology. She discusses advancements such as computer vision predicting crop yields and AI identifying pests. However, she notes the challenges faced by farmers, exemplified by the controversy over John Deere’s smart tractors, which restricted farmers’ ability to repair their own equipment. This gave rise to the “right to repair” movement, which advocates for the ability to repair one’s own technology, whether tractors or household devices. Chowdhury emphasizes that this right should extend to AI systems so that people can fix and trust the technologies they use.

Chowdhury then addresses the declining public confidence in AI, citing polls that show widespread concern about the technology’s impact. She explains that people feel alienated because their data is used without consent to create systems that affect their lives, and they lack a voice in how these systems are built. To bridge this gap, she proposes red teaming, a practice with roots in the military and cybersecurity in which external experts probe a system for flaws. She highlights successful red-teaming exercises with scientists and architects, which led to improvements in AI models and demonstrated the need for AI systems that interact with and are trusted by users.

In her concluding remarks, Chowdhury emphasizes the importance of involving people in the AI development process to build trust and ensure the technology benefits everyone. She introduces the idea of a “right to repair” for AI, suggesting tools like diagnostics boards and collaborations with ethical hackers that allow users to understand and improve AI systems. Chowdhury stresses that the potential of AI can only be realized if developers and users work together. She calls for a shift in focus from merely building trustworthy AI to creating tools that empower people to make AI work for them, asserting that technologists cannot achieve this goal without the public.

Humane Intelligence的首席执行官兼联合创始人Rumman Chowdhury在演讲开头强调了人工智能(AI)与农业技术的交叉点。她谈到了计算机视觉预测作物产量、AI识别害虫等技术进步。然而,她也指出了农民面临的挑战,例如John Deere智能拖拉机引发的争议——该公司限制了农民自行维修设备的能力。这催生了名为"维修权"的运动,倡导人们有权维修自己的技术设备,无论是拖拉机还是家用电器。Chowdhury强调,这种权利应扩展到AI系统,以确保人们能够修复并信任他们所使用的技术。

Chowdhury接着谈到公众对AI信任度的下降,并引用民调显示,人们对这项技术的影响普遍感到担忧。她解释说,人们感到被疏远,因为他们的数据在未经同意的情况下被用来构建影响他们生活的系统,而他们对这些系统如何构建却没有发言权。为了弥合这一差距,她提出了"红队"(red teaming)的概念——这是一种源自军事、常用于网络安全领域的实践,由外部专家来测试并找出系统中的漏洞。她列举了与科学家和建筑师合作的成功红队演练案例:这些演练改进了AI模型,也表明人们需要能够与用户互动、并赢得用户信任的AI系统。

在总结发言中,Chowdhury强调了让人们参与AI开发过程的重要性,以建立信任并确保技术惠及所有人。她提出了AI的"维修权"概念,建议提供诊断面板等工具,并与道德黑客合作,让用户能够理解和改进AI系统。Chowdhury强调,只有开发者和用户共同努力,AI的潜力才能得以实现。她呼吁将重点从仅仅构建可信的AI,转向创建能让人们让AI为自己服务的工具,并坚称仅靠技术人员无法实现这一目标,必须有公众的参与。

Transcript

I want to tell you a story

about artificial intelligence and farmers.

Now, what a strange combination, right?

Two topics could not sound
more different from each other.

But did you know that modern farming
actually involves a lot of technology?

So computer vision is used
to predict crop yields.

And artificial intelligence
is used to find,

identify and get rid of insects.

Predictive analytics helps figure out
extreme weather conditions

like drought or hurricanes.

But this technology
is also alienating to farmers.

And this all came to a head in 2017

with the tractor company John Deere
when they introduced smart tractors.

So before then,
if a farmer’s tractor broke,

they could just repair it themselves
or take it to a mechanic.

Well, the company actually made it illegal

for farmers to fix their own equipment.

You had to use a licensed technician

and farmers would have to wait for weeks

while their crops rot and pests took over.

So they took matters into their own hands.

Some of them learned to program,

and they worked with hackers to create
patches to repair their own systems.

In 2022,

at one of the largest hacker
conferences in the world, DEFCON,

a hacker named Sick Codes and his team

showed everybody how to break
into a John Deere tractor,

showing that, first of all,
the technology was vulnerable,

but also that you can and should
own your own equipment.

To be clear, this is illegal,

but there are people
trying to change that.

Now that movement is called
the “right to repair.”

The right to repair
goes something like this.

If you own a piece of technology,

it could be a tractor, a smart toothbrush,

a washing machine,

you should have the right
to repair it if it breaks.

So why am I telling you this story?

The right to repair needs to extend
to artificial intelligence.

Now it seems like every week

there is a new and mind-blowing
innovation in AI.

But did you know that public confidence
is actually declining?

A recent Pew poll showed
that more Americans are concerned

than they are excited
about the technology.

This is echoed throughout the world.

The World Risk Poll shows

that respondents from Central
and South America and Africa

all said that they felt AI would lead
to more harm than good for their people.

As a social scientist and an AI developer,

this frustrates me.

I’m a tech optimist

because I truly believe
this technology can lead to good.

So what’s the disconnect?

Well, I’ve talked to hundreds
of people over the last few years.

Architects and scientists,
journalists and photographers,

ride-share drivers and doctors,

and they all say the same thing.

People feel like an afterthought.

They all know that their data is harvested
often without their permission

to create these sophisticated systems.

They know that these systems
are determining their life opportunities.

They also know that nobody
ever bothered to ask them

how the system should be built,

and they certainly have no idea
where to go if something goes wrong.

We may not own AI systems,

but they are slowly dominating our lives.

We need a better feedback loop

between the people
who are making these systems,

and the people who are best
determined to tell us

how these AI systems
should interact in their world.

One step towards this
is a process called red teaming.

Now, red teaming is a practice
that was started in the military,

and it’s used in cybersecurity.

In a traditional red-teaming exercise,

external experts are brought in
to break into a system,

sort of like what Sick Codes did
with tractors, but legal.

So red teaming acts as a way
of testing your defenses

and when you can figure out
where something will go wrong,

you can figure out how to fix it.

But when AI systems go rogue,

it’s more than just a hacker breaking in.

The model could malfunction
or misrepresent reality.

So, for example, not too long ago,

we saw an AI system attempting diversity

by showing historically inaccurate photos.

Anybody with a basic
understanding of Western history

could have told you
that neither the Founding Fathers

nor Nazi-era soldiers
would have been Black.

In that case, who qualifies as an expert?

You.

I’m working with thousands of people
all around the world

on large and small red-teaming exercises,

and through them we found
and fixed mistakes in AI models.

We also work with some of the biggest
tech companies in the world:

OpenAI, Meta, Anthropic, Google.

And through this, we’ve made models
work better for more people.

Here’s a bit of what we’ve learned.

We partnered with the Royal Society
in London to do a scientific,

mis- and disinformation event
with disease scientists.

What these scientists found

is that AI models actually had
a lot of protections

against COVID misinformation.

But for other diseases like measles,
mumps and the flu,

the same protections didn’t apply.

We reported these changes,

they’re fixed and now
we are all better protected

against scientific mis-
and disinformation.

We did a really similar exercise
with architects at Autodesk University,

and we asked them a simple question:

Will AI put them out of a job?

Or more specifically,

could they imagine a modern AI system

that would be able to design the specs
of a modern art museum?

The answer, resoundingly, was no.

Here’s why: architects do more
than just draw buildings.

They have to understand physics
and material science.

They have to know building codes,

and they have to do that

while making something
that evokes emotion.

What the architects wanted
was an AI system

that interacted with them,
that would give them feedback,

maybe proactively offer
design recommendations.

And today’s AI systems,
not quite there yet.

But those are technical problems.

People building AI are incredibly smart,

and maybe they could solve
all that in a few years.

But that wasn’t their biggest concern.

Their biggest concern was trust.

Now architects are liable if something
goes wrong with their buildings.

They could lose their license,

they could be fined,
they could even go to prison.

And failures can happen
in a million different ways.

For example, exit doors
that open the wrong way,

leading to people being crushed
in an evacuation crisis,

or broken glass raining down
onto pedestrians in the street

because the wind blows too hard
and shatters windows.

So why would an architect trust
an AI system with their job,

with their literal freedom,

if they couldn’t go in
and fix a mistake if they found it?

So we need to figure out these problems
today, and I’ll tell you why.

The next wave of artificial intelligence
systems, called agentic AI,

is a true tipping point

between whether or not
we retain human agency,

or whether or not AI systems
make our decisions for us.

Imagine an AI agent as kind of
like a personal assistant.

So, for example,
a medical agent might determine

whether or not your family needs
doctor’s appointments,

it might refill prescription medications,
or in case of an emergency,

send medical records to the hospital.

But AI agents can’t and won’t exist

unless we have a true right to repair.

What parent would trust
their child’s health to an AI system

unless you could run
some basic diagnostics?

What professional would trust
an AI system with job decisions,

unless you could retrain it
the way you might a junior employee?

Now, a right to repair
might look something like this.

You could have a diagnostics board

where you run basic tests that you design,

and if something’s wrong,
you could report it to the company

and hear back when it’s fixed.

Or you could work with third parties
like ethical hackers

who make patches for systems
like we do today.

You can download them and use them
to improve your system

the way you want it to be improved.

Or you could be like these intrepid
farmers and learn to program

and fine-tune your own systems.

We won’t achieve the promised benefits
of artificial intelligence

unless we figure out how to bring people
into the development process.

I’ve dedicated my career
to responsible AI,

and in that field we ask the question,

what can companies build
to ensure that people trust AI?

Now, through these red-teaming exercises,
and by talking to you,

I’ve come to realize that we’ve been
asking the wrong question all along.

What we should have been asking
is what tools can we build

so people can make AI beneficial for them?

Technologists can’t do it alone.

We can only do it with you.

Thank you.

(Applause)

Afterword

Written in Shanghai at 18:01 on June 6, 2024.
