Ways to think about AGI

Benedict Evans, 4 May 2024


How do we think about a fundamentally unknown and unknowable risk, when the experts agree only that they have no idea?

The manuscript for ‘A Logic Named Joe’

In 1946, my grandfather, writing as ‘Murray Leinster’, published a science fiction story called ‘A Logic Named Joe’. Everyone has a computer (a ‘logic’) connected to a global network that does everything from banking to newspapers and video calls. One day, one of these logics, ‘Joe’, starts giving helpful answers to any request, anywhere on the network: invent an undetectable poison, say, or suggest the best way to rob a bank. Panic ensues - ‘Check your censorship circuits!’ - until they work out what to unplug. (My other grandfather, meanwhile, was using computers to spy on the Germans, and then the Russians.)

For as long as we’ve thought about computers, we’ve wondered if they could make the jump from mere machines, shuffling punch-cards and databases, to some kind of ‘artificial intelligence’, and wondered what that would mean, and indeed, what we’re trying to say with the word ‘intelligence’. There’s an old joke that ‘AI’ is whatever doesn’t work yet, because once it works, people say ‘that’s not AI - it’s just software’. Calculators do super-human maths, and databases have super-human memory, but they can’t do anything else, and they don’t understand what they’re doing, any more than a dishwasher understands dishes, or a drill understands holes. A drill is just a machine, and databases are ‘super-human’ but they’re just software. Somehow, people have something different, and so, on some scale, do dogs, chimpanzees and octopuses and many other creatures. AI researchers have come to talk about this as ‘general intelligence’ and hence making it would be ‘artificial general intelligence’ - AGI.

If we really could create something in software that was meaningfully equivalent to human intelligence, it should be obvious that this would be a very big deal. Can we make software that can reason, plan, and understand? At the very least, that would be a huge change in what we could automate, and as my grandfather and a thousand other science fiction writers have pointed out, it might mean a lot more.

Every few decades since 1946, there’s been a wave of excitement that something like this might be close, each time followed by disappointment and an ‘AI Winter’, as the technology approach of the day slowed down and we realised that we needed an unknown number of unknown further breakthroughs. In 1970 the AI pioneer Marvin Minsky claimed that in “from three to eight years we will have a machine with the general intelligence of an average human being”, but each time we thought we had an approach that would produce that, it turned out that it was just more software (or just didn’t work).

As we all know, the Large Language Models (LLMs) that took off 18 months ago have driven another such wave. Serious AI scientists who previously thought AGI was probably decades away now suggest that it might be much closer. At the extreme, the so-called ‘doomers’ argue there is a real risk of AGI emerging spontaneously from current research and that this could be a threat to humanity, and call for urgent government action. Some of this comes from self-interested companies seeking barriers to competition (‘This is very dangerous and we are building it as fast as possible, but don’t let anyone else do it’), but plenty of it is sincere.

(I should point out, incidentally, that the doomers’ ‘existential risk’ concern that an AGI might want to and be able to destroy or control humanity, or treat us as pets, is quite independent of more quotidian concerns about, for example, how governments will use AI for face recognition, or talking about AI bias, or AI deepfakes, and all the other ways that people will abuse AI or just screw up with it, just as they have with every other technology.)

However, for every expert that thinks that AGI might now be close, there’s another who doesn’t. There are some who think LLMs might scale all the way to AGI, and others who think, again, that we still need an unknown number of unknown further breakthroughs.

More importantly, they would all agree that they don’t actually know. This is why I used terms like ‘might’ or ‘may’ - our first stop is an appeal to authority (often considered a logical fallacy, for what that’s worth), but the authorities tell us that they don’t know, and don’t agree.

They don’t know, either way, because we don’t have a coherent theoretical model of what general intelligence really is, nor why people seem to be better at it than dogs, nor how exactly people or dogs are different to crows or indeed octopuses. Equally, we don’t know why LLMs seem to work so well, and we don’t know how much they can improve. We know, at a basic and mechanical level, about neurons and tokens, but we don’t know why they work. We have many theories for parts of these, but we don’t know the system. Absent an appeal to religion, we don’t know of any reason why AGI cannot be created (it doesn’t appear to violate any law of physics), but we don’t know how to create it or what it is, except as a concept.

And so, some experts look at the dramatic progress of LLMs and say ‘perhaps!’ and others say ‘perhaps, but probably not!’, and this is fundamentally an intuitive and instinctive assessment, not a scientific one.

Indeed, ‘AGI’ itself is a thought experiment, or, one could suggest, a place-holder. Hence, we have to be careful of circular definitions, and of defining something into existence, certainty or inevitability.

If we start by defining AGI as something that is in effect a new life form, equal to people in ‘every’ way (barring some sense of physical form), even down to concepts like ‘awareness’, emotions and rights, and then presume that given access to more compute it would be far more intelligent (and that there even is a lot more spare compute available on earth), and presume that it could immediately break out of any controls, then that sounds dangerous, but really, you’ve just begged the question.

As Anselm demonstrated, if you define God as something that exists, then you’ve proved that God exists, but you won’t persuade anyone. Indeed, a lot of AGI conversations sound like the attempts by some theologians and philosophers of the past to deduce the nature of god by reasoning from first principles. The internal logic of your argument might be very strong (it took centuries for philosophers to work out why Anselm’s proof was invalid) but you cannot create knowledge like that.

Equally, you can survey lots of AI scientists about how uncertain they feel, and produce a statistically accurate average of the result, but that doesn’t of itself create certainty, any more than surveying a statistically accurate sample of theologians would produce certainty as to the nature of god, or, perhaps, bundling enough sub-prime mortgages together can produce AAA bonds, another attempt to produce certainty by averaging uncertainty. One of the most basic fallacies in predicting tech is to say ‘people were wrong about X in the past so they must be wrong about Y now’, and the fact that leading AI scientists were wrong before absolutely does not tell us they’re wrong now, but it does tell us to hesitate. They can all be wrong at the same time.

Meanwhile, how do you know that’s what general intelligence would be like? Isaiah Berlin once suggested that even presuming there is in principle a purpose to the universe, and that it is in principle discoverable, there’s no a priori reason why it must be interesting. ‘God’ might be real, and boring, and not care about us, and we don’t know what kind of AGI we would get. It might scale to 100x more intelligent than a person, or it might be much faster but no more intelligent (is intelligence ‘just’ about speed?). We might produce general intelligence that’s hugely useful but no more clever than a dog, which, after all, does have general intelligence, and, like databases or calculators, a super-human ability (scent). We don’t know. 

Taking this one step further, as I listened to Mark Zuckerberg talking about Llama 3, it struck me that he talks about ‘general intelligence’ as something that will arrive in stages, with different modalities a little at a time. Maybe people will point at the ‘general intelligence’ of Llama 6 or ChatGPT 7 and say “That’s not AGI, it’s just software!” We created the term AGI because AI came just to mean software, and perhaps ‘AGI’ will be the same, and we’ll need to invent another term.

This fundamental uncertainty, even at the level of what we’re talking about, is perhaps why all conversations about AGI seem to turn to analogies. If you can compare this to nuclear fission then you know what to expect, and you know what to do. But this isn’t fission, or a bioweapon, or a meteorite. This is software, that might or might not turn into AGI, that might or might not have certain characteristics, some of which might be bad, and we don’t know. And while a giant meteorite hitting the earth could only be bad, software and automation are tools, and over the last 200 years automation has sometimes been bad for humanity, but mostly it’s been a very good thing that we should want much more of.

Hence, I’ve already used theology as an analogy, but my preferred analogy is the Apollo Program. We had a theory of gravity, and a theory of the engineering of rockets. We knew why rockets didn’t explode, and how to model the pressures in the combustion chamber, and what would happen if we made them 25% bigger. We knew why they went up, and how far they needed to go. You could have given the specifications for the Saturn rocket to Isaac Newton and he could have done the maths, at least in principle: this much weight, this much thrust, this much fuel… will it get there? We have no equivalents here. We don’t know why LLMs work, how big they can get, or how far they have to go. And yet, we keep making them bigger, and they do seem to be getting close. Will they get there? Maybe, yes!

On this theme, some people suggest that we are in the empirical stage of AI or AGI: we are building things and making observations without knowing why they work, and the theory can come later, a little as Galileo came before Newton (there’s an old English joke about a Frenchman who says ‘that’s all very well in practice, but does it work in theory?’). Yet while we can, empirically, see the rocket going up, we don’t know how far away the moon is. We can’t plot people and ChatGPT on a chart and draw a line to say when one will reach the other, even just extrapolating the current rate of growth.

All analogies have flaws, and the flaw in my analogy, of course, is that if the Apollo program went wrong the downside was not, even theoretically, the end of humanity. A little before my grandfather, here’s another magazine writer on unknown risks:


What, then, is your preferred attitude to risks that are real but unknown? Which thought experiment do you prefer? We can return to half-forgotten undergraduate philosophy (Pascal’s Wager! Anselm’s Proof!), but if you can’t know, do you worry, or shrug? How do we think about other risks? Meteorites are a poor analogy for AGI because we know they’re real, we know they could destroy mankind, and they have no benefits at all (unless they’re very very small). And yet, we’re not really looking for them.

Presume, though, you decide the doomers are right: what can you do? The technology is in principle public. Open source models are proliferating. For now, LLMs need a lot of expensive chips (Nvidia sold $47.5bn in the last 12 months and can’t meet demand), but on a decade’s view the models will get more efficient and the chips will be everywhere. In the end, you can’t ban mathematics. On a scale of decades, it will happen anyway. If you must use analogies to nuclear fission, imagine if we discovered a way that anyone could build a bomb in their garage with household materials - good luck preventing that. (A doomer might respond that this answers the Fermi paradox: at a certain point every civilisation creates AGI and it turns them into paperclips.)

By default, though, this will follow all the other waves of AI, and become ‘just’ more software and more automation. Automation has always produced frictional pain, back to the Luddites, and the UK’s Post Office scandal reminds us that you don’t need AGI for software to ruin people’s lives. LLMs will produce more pain and more scandals, but life will go on. At least, that’s the answer I prefer myself.
