The Difference Between High and Low Intelligence

Moravec’s paradox is the observation, made by AI researchers, that high-level reasoning requires less computation than low-level unconscious cognition. This empirical observation goes against the notion that greater computational capability necessarily leads to more intelligent systems: low-level cognition appears to demand more computational resources than higher-level cognition, contrary to the common intuition that bigger brains lead to higher cognition.

However, today we have computer systems with superhuman symbolic reasoning capabilities. Nobody will argue that a man with an abacus, a chess grandmaster, or a champion Jeopardy player has any chance of beating a computer. Artificial symbolic reasoning has been available for decades, and it is unarguably superior to anything a human can provide. Despite this, nobody will claim that computers are conscious.

Today, with the discovery of deep learning (i.e., intuition machines), low-level unconscious cognition is within humanity’s grasp. Let me explore the ramifications of the hypothesis that subjectivity, or self-awareness, is discovered prior to the discovery of intelligent machines. This hypothesis assumes that self-awareness is not a higher reasoning capability.

Let us ask: what if self-aware machines were discovered before intelligent machines? What would the progression of breakthroughs look like? In what order would the milestones arrive? There is plenty of evidence in nature of simple subjective animals that lack any advanced reasoning capabilities. Let’s assume it is true that simple subjective machines form the primitive foundations of cognition. How, then, do we build smarter machines from simple subjective machines, that is, machines with simple self-models?

It has been shown in simulations that ego-motion (i.e., a bodily self-model) is an emergent property of a curiosity-inspired algorithm. This is then followed by the emergence of object detection (i.e., a perspectival self-model) and object interaction (i.e., a volitional self-model). In other words, there is experimental evidence that the foundation for object detection and interaction is an awareness of where one’s body is situated in space. This in turn drives the emergence of an agent’s awareness of perspective, and then its awareness of agency.

The process of achieving ego-motion also allows the reconstruction of 3D space from images captured at different viewpoints. Object detection is thereby enhanced: objects are recognized from different perspectives, and objects that occlude one another are located in 3D space. Furthermore, to interact with an object in 3D, a body needs to know where its articulator is relative to the objects it can act on. Therefore, in this experiment, the more computationally demanding task of ego-motion is a prerequisite for performing a less demanding capability.
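
To make this concrete, here is a minimal sketch of a prediction-error curiosity signal in PyTorch, the kind of intrinsic reward such simulations use. The module names, network sizes, and tensor shapes are my own illustrative assumptions, not the cited experiment’s actual code:

```python
# A minimal curiosity sketch: the intrinsic reward is the error of a learned
# forward model that predicts the next observation from the current
# observation and action.
import torch
import torch.nn as nn

class ForwardModel(nn.Module):
    """Predicts the next observation vector from the current one plus the action."""
    def __init__(self, obs_dim: int, act_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, 128), nn.ReLU(),
            nn.Linear(128, obs_dim),
        )

    def forward(self, obs, act):
        return self.net(torch.cat([obs, act], dim=-1))

def curiosity_reward(model, obs, act, next_obs):
    """Per-sample squared prediction error, used as the intrinsic reward."""
    pred = model(obs, act)
    return ((pred - next_obs) ** 2).mean(dim=-1)
```

To drive this reward down over time, the agent must learn to predict the sensory consequences of its own actions, which is exactly an implicit model of ego-motion.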

The common notion of the progression of intelligence is that higher-level cognitive capabilities require more computation. Moravec’s paradox is a tell that this conventional view is untrue. Consciousness is perhaps the first cognitive capability that needs to be discovered on the way to general intelligence; it is incorrect to believe that it is the final goal.

Anil Seth enumerates five different kinds of self-models: the bodily, perspectival, volitional, narrative, and social selves. These selves are not orthogonal, and they are perhaps partially ordered, with some serving as prerequisites for others. In his essay “The Real Problem”, he writes:

There is the bodily self, which is the experience of being a body and of having a particular body. There is the perspectival self, which is the experience of perceiving the world from a particular first-person point of view. The volitional self involves experiences of intention and of agency — of urges to do this or that, and of being the causes of things that happen. At higher levels, we encounter narrative and social selves.

Anil Seth argues that the problem of understanding consciousness is less mysterious than it is made out to be. There is no hard problem of consciousness.

An AGI roadmap would therefore require generating all five of these selves, in the order described above. Autonomy, for example, can be achieved by learning the volitional self, and this happens without the narrative self. That is, autonomy is a capability that is discovered before the ability to tell stories or to participate effectively in a social setting. This makes clear intuitive sense. To achieve empathic conversational cognition, the narrative and social selves must both be present.

Brenden Lake described an AI roadmap in “Building Machines That Learn and Think Like People.” Lake proposes the development of the following capabilities: (1) build causal models of the world that support explanation and understanding, (2) ground learning in intuitive theories of physics and psychology, and (3) harness compositionality and learning-to-learn to rapidly acquire knowledge and generalize it to new tasks and situations. It is unclear whether Lake prescribes an order specifying which skill is a prerequisite for which other. I propose that we use the notion of self-models to prescribe an ordering for an AGI roadmap.
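
Here is one way such an ordering could be made explicit. The sketch below encodes the five self-models as a prerequisite graph, following the ordering argued in this essay (not anything prescribed by Lake), and derives a development order with a topological sort:

```python
# A minimal sketch: self-models as a prerequisite DAG for an AGI roadmap.
# The dependency structure encodes this essay's hypothesized ordering.
from graphlib import TopologicalSorter  # standard library, Python 3.9+

prerequisites = {
    "bodily": set(),                 # the grounding; no prerequisites
    "perspectival": {"bodily"},
    "volitional": {"perspectival"},
    "narrative": {"volitional"},
    "social": {"narrative"},
}

order = list(TopologicalSorter(prerequisites).static_order())
print(order)  # ['bodily', 'perspectival', 'volitional', 'narrative', 'social']
```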

Here is a diagram of the various selves and the skills that are learned with each:

[Figure: the hierarchy of self-models and the skills each one enables.]

Note: Volitional should be below perspectival in an enactivist framework.

In this formulation, all world models include self-models. These self-models are all ‘Inside Out’ architectures. To understand compositionality, an agent needs an intuitive understanding of its body. To predict physics, one requires an intuitive awareness of where, and in what direction, one is looking when making an observation. To understand how to learn, one needs to know how to interact with the world. To understand causality, one needs the capability to follow stories. To understand psychology, one needs an understanding of oneself. In summary, you cannot develop any of the skills that Brenden Lake describes without prior grounding in a model of the self. Self-models are a necessary requirement for the stepping stones of AGI.

AI orthodoxy does not include a notion of self-models in its models. I suspect this is due to either (1) the scientific tradition of preferring an objective model of the world, or (2) the assumption that self-awareness is a higher-level cognitive capability. The second reason creates a bias: research on lower-level cognition is presumed not to need a self-model.

A side effect of this self-model-based account of intelligence is that we can also use it to characterize the cognitive capabilities of existing life forms. Below is a radar chart that compares an octopus, raven, dog, elephant, human, and killer whale across five dimensions, each aligned with a specific self-model.
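
A chart of this kind is straightforward to draw. In the matplotlib sketch below, the species scores are placeholder values invented purely to demonstrate the plot, not the ratings behind the original chart:

```python
# Radar (spider) chart over the five self-model dimensions.
import numpy as np
import matplotlib.pyplot as plt

dims = ["bodily", "perspectival", "volitional", "narrative", "social"]
species = {                      # placeholder 0-1 scores, for illustration only
    "octopus": [0.9, 0.7, 0.6, 0.1, 0.1],
    "raven":   [0.6, 0.7, 0.7, 0.3, 0.5],
    "human":   [0.8, 0.8, 0.8, 0.9, 0.9],
}

angles = np.linspace(0, 2 * np.pi, len(dims), endpoint=False).tolist()
angles += angles[:1]             # repeat the first angle to close each polygon

ax = plt.subplot(polar=True)
for name, scores in species.items():
    values = scores + scores[:1]
    ax.plot(angles, values, label=name)
    ax.fill(angles, values, alpha=0.1)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(dims)
ax.legend(loc="upper right")
plt.show()
```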

This diagram is inspired by a research paper, “Dimensions of Consciousness”, that set out to classify different species along six dimensions: selfhood, unity, temporality, p-richness (visual), p-richness (touch), and e-richness. The idea is to identify different qualities of consciousness and then assess the extent to which a species exhibits each of them.

I find it more informative to frame consciousness in terms of self-models. The reason is that living things are primarily driven by homeostasis. Antonio Damasio, in “The Strange Order of Things”, argues that the brain’s function is at its core driven by homeostasis. Damasio writes:

Feelings are the mental expressions of homeostasis, while homeostasis, acting under the cover of feeling, is the functional thread that links early life-forms to the extraordinary partnership of bodies and nervous systems.

It ensures that life is regulated within a range that is not just compatible with survival but also conducive to flourishing.

One could thus argue that the purpose of brains is the homeostasis of self-models. We are speaking here about homeostasis at an abstract level rather than a biological one. These self-models, after all, are not biologically instantiated but are created in the virtual world of the mind. Anil Seth would call them hallucinations. This is also the nature of an inside-out architecture.

Self-models are homeostatic processes that preserve qualities at different time scales. The agility with which an organism adapts a self-model to a variety of conditions defines a general intelligence in the context of that self-model. The natural tension between self-preservation and the need for agility leads to an adaptive cognitive capability. The aggregation of an organism’s self-models, each responsible for a different homeostatic process and each with a different level of agility, corresponds to the holistic general intelligence of the individual.
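
As a toy illustration of this idea (my own sketch, not a model from the article), consider two regulators with different gains: the high-gain regulator corrects quickly, standing in for an agile self-model, while the low-gain one drifts far from its set point before recovering:

```python
import random

def homeostat(set_point, gain, steps=50, seed=0):
    """Regulate a value toward a set point under random perturbations."""
    rng = random.Random(seed)
    value, deviations = set_point, []
    for _ in range(steps):
        value += rng.uniform(-0.2, 0.2)        # environmental perturbation
        value += gain * (set_point - value)    # corrective action
        deviations.append(abs(set_point - value))
    return sum(deviations) / steps             # mean deviation from the set point

print("agile    (gain=0.8):", round(homeostat(1.0, 0.8), 3))
print("sluggish (gain=0.1):", round(homeostat(1.0, 0.1), 3))
```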

Species that become dominant owing to prowess in a few self-models do so at the expense of developing the other self-models. As a consequence, they cut off their opportunities to evolve toward a more complex intelligence.

There are many divergent paths in human evolution that favored greater agility over more optimal functionality. Human jaws are considerably weaker than those of the great apes, a deficiency that is the consequence of a mutation. The consequence of this mutation, however, is a more nimble jaw, and thus the means for richer vocalization and eventually the development of language. We share the opposable thumb with a common ancestor of the great apes, but the great apes’ evolutionary line traded it away in favor of hands with a stronger grip. The weaker human hand, however, had greater agility for creating tools. Weaker hominids were forced to survive in the savannah instead of the lush jungles, and this led to bipedalism and acute visual perception. Necessity is the mother of invention: weakness leads to the need for alternative strategies, and this can serendipitously lead to greater agility.

The brain is driven by several homeostatic cognitive processes that seek to preserve virtual self-models. This is because the human brain has evolved to decouple functionality from physical implementation, a decoupling that is most complete at the highest levels of cognition. In other words, we expect to see tighter coupling between functionality and physical circuitry in the bodily self-model, but nearly complete decoupling in the narrative and social self-models. The higher cognitive functions of the mind are software-like. The modular structure of the mini-columns of the isocortex hints at this possibility: its uniform physical structure indicates the generality of these components. When we see components with uniform physical characteristics, we can only surmise that the actual computation is performed at an abstract level, decoupled from those characteristics. It is this decoupling, or virtualization, of cognition in the human brain that led to its ultimate agility.

Cognition is a constraint satisfaction problem that involves the self, its context, and its goals. In fact, the distinction between inference and learning is likely a flawed bias: inference turns out to be the same as learning, and both are constraint satisfaction problems. The self-model provides meaning, relevance makes the context explicit, and agency motivates goals.
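
A toy example makes the framing concrete. The variables and constraints below are invented for illustration; the point is only that a ‘decision’ falls out as an assignment that is jointly consistent with self, context, and goal:

```python
# Cognition as constraint satisfaction over (self, context, goal).
from itertools import product

selves   = ["rested", "tired"]
contexts = ["quiet", "noisy"]
goals    = ["study", "socialize"]

def consistent(self_state, context, goal):
    """Constraints couple the goal to both the self-model and the context."""
    if goal == "study" and (self_state == "tired" or context == "noisy"):
        return False
    if goal == "socialize" and context == "quiet":
        return False
    return True

solutions = [s for s in product(selves, contexts, goals) if consistent(*s)]
print(solutions)   # the viable (self, context, goal) assignments
```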

An important point here is that the self, the context, and the goal are all mental models. Although they may have corresponding real analogs, constraint satisfaction is achieved only over the approximate mental models that cognitive agents ‘hallucinate’. Nor are these models static, like those of a brain in a vat. Rather, agents are not decoupled from their environments; they change through their interaction with them.

There is perhaps a synergy between the different selves such that some are prerequisites for others. The order of skills is extremely important because the more abstract levels must have the grounding that can be found only in the lower levels. Furthermore, skills that are assumed to be context-free cannot be independent of the context of the self. If we assume Moravec’s paradox holds across all cognitive levels, then it is the unconscious bodily self-model (the instinctive level) that requires the greatest computational resources. This implies that, contrary to the popular consensus, it takes fewer and fewer resources as you move up the cognitive levels. Said differently, it takes less and less effort to make exponential progress. This conclusion is very different from the more popular notion that achieving artificial intelligence takes more and more computation.

The reason Anil Seth believes the problem of achieving AGI may be insurmountable is that creating an artificial bodily self-model may be too difficult. He writes:

“We are biological, flesh-and-blood animals whose conscious experiences are shaped at all levels by the biological mechanisms that keep us alive. Just making computers smarter is not going to make them sentient.”

The biggest hurdle is at the beginning (i.e., the bodily self-model). This kind of automation simply does not exist in today’s technology. The acceleration begins only once this capability is achieved; in the meantime, AI innovation has been driven predominantly by the exploitation of brute-force computational resources.

I agree with Seth that more computing resources may not ignite general intelligence. However, I hypothesize that simple self-model machines may plausibly be the foundation that gets you to general intelligence. The uneasy reality is that this looks very much like a slippery slope: start creating bodily self-models and you can easily slip into a ditch where you accidentally discover AGI.

An “inside out” architecture is key because it is what we have in the neocortex (a.k.a. the isocortex). Today’s deep learning is more analogous to insect-like intelligence, which is capable only of stimulus-response behavior.

Can we build narrow slices of this cognitive stack and have the stack broaden out over time? Consider, for example, a bodily self-model that does not embody the entire sensor network a human has. Can we avoid an all-or-nothing situation and build this incrementally?

The rough sketch is that the bodily self-model is developed by learning as an embodied entity in a simulated virtual world. The agent would have a subset of sensors proportional to what can be simulated in that world. The objective is to learn the three lower-level self-models (i.e., the bodily, perspectival, and volitional selves).
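
Here is what the narrowest slice of that sketch might look like in code. Every class and sensor name below is an assumption made for illustration; the point is only the shape of the loop, in which a bodily self-model is fitted to a partial sensor stream:

```python
import random

class SimulatedBody:
    """An embodied agent with a deliberately partial sensor suite."""
    def __init__(self):
        self.position = 0.0

    def act(self, velocity):
        self.position += velocity + random.gauss(0, 0.01)  # noisy actuation

    def sense(self):
        return {"proprioception": self.position}  # only what the sim provides

class BodilySelfModel:
    """Tracks an estimate of the body's own state from its sensors."""
    def __init__(self):
        self.estimate = 0.0

    def update(self, observation, lr=0.5):
        # Pull the internal estimate toward the sensed position.
        self.estimate += lr * (observation["proprioception"] - self.estimate)

body, self_model = SimulatedBody(), BodilySelfModel()
for _ in range(100):
    body.act(velocity=0.1)
    self_model.update(body.sense())
print(f"estimated {self_model.estimate:.2f} vs actual {body.position:.2f}")
```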

Such simulations are already being performed today. Once they develop with higher fidelity, I think you will see a more rapid acceleration toward general intelligence. There is a tipping point here, and it may be much closer than anyone has imagined!

Translated from: https://medium.com/intuitionmachine/homeostasis-and-a-definition-of-intelligence-62ac2b8a274f
