Behavioral Economics

In 1975, Herbert A. Simon was awarded the Turing Award by the Association for Computing Machinery. This award, given to “an individual selected for contributions of a technical nature made to the computing community,” is considered the Nobel Prize of computing.

Simon and co-recipient Allen Newell made basic contributions to artificial intelligence, the psychology of human cognition, and list processing.

It is interesting to note that, alongside his contributions to artificial intelligence and list processing, he was also recognised for his contribution to human cognition. At first glance, one would think that understanding how humans think is about as far from computer science as you can get!

However, there are two key arguments that explain why human cognition is important for any advancements in computer science, and especially AI.

Imitating Humans

In his 1950 seminal paper “Computing Machinery and Intelligence,” Alan Turing introduced what became known as the Turing test. A computer and a human have a written dialogue and in this “imitation game” the computer tries to fool the human participant into thinking it is also a human by devising responses that it thinks a human would make.

One of the key aims of AI is to train computers to make decisions like humans, whether labelling pictures or responding to questions. Even if the aim is task-specific, not centred around replicating humans in their entirety, it is crucial that developers of AI have some understanding of human cognition, so that they can replicate it.

Human Interaction

One of the many modern applications of AI, specifically machine learning, is in human-facing interaction. Whether recommending products to drive sales or auto-completing sentences in emails, machine learning models are trained to understand what users want. However, the methods, data and metrics used to develop these models need to be informed by an understanding of how the model output will interact with human users.

In this article, we’ll focus on the interaction between humans and computer models, understanding how behavioral economics can be used to help data scientists develop and train more effective machine learning models.

What is behavioral economics?

Classical economics is based on the assumption that all individuals behave rationally, i.e. they will make the decision with the greatest personal utility (benefit).

However, modern economists began to realize that humans often behave irrationally. Not only that, but they are predictably irrational, behaving in the same irrational way every time they make similar decisions. Behavioral economics is the study of these predictably irrational decisions, known as cognitive biases.

Therefore, sitting somewhere between psychology and economics, behavioral economists try to identify and measure, through experimentation, these systematic deviations from rational behavior, and to identify them in the real world.

Daniel Kahneman and Amos Tversky, widely considered to be the founding fathers of the field, wrote extensively on the practical implications of cognitive bias in various fields, including finance, clinical judgment and management.

There are several types of cognitive biases that data scientists can use to improve the efficacy of their machine learning models.

Confirmation Bias

Confirmation bias is the tendency for humans to search for information that confirms their prior beliefs. This occurs because people naturally cherry-pick information that aligns with what they already believe is true.

As an extreme example, if you believe that the world is flat you will search extensively for evidence, no matter how scarce or unreliable, that supports your hypothesis, and ignore the widely available and reliable evidence against it.

Although he did not call it “confirmation bias,” one of the earliest experimental demonstrations of it was by Peter Wason in 1960.

In his experiment, he challenged subjects to identify a rule relating three sequential numbers, starting from the example [2, 4, 6]. To try to learn the rule, they were allowed to generate any set of three numbers, and the experimenter would tell them whether or not it fit the rule.

Wason found that most subjects devised extremely complex rules and generated many triplets that conformed to them. This is a poor tactic: no matter how many combinations the experimenter confirms, you cannot prove a rule definitively, yet you can disprove one with a single counterexample. The rule was simply any sequence in ascending order, and only 6 out of 29 subjects identified it on their first guess.
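
The asymmetry Wason observed is easy to see in code. Here is a minimal sketch (the hidden rule and the test triplets are illustrative) of why confirming guesses alone can never pin the rule down, while a single disconfirming guess can eliminate a hypothesis:

```python
def fits_rule(triplet):
    """The experimenter's hidden rule: any strictly ascending sequence."""
    a, b, c = triplet
    return a < b < c

# A subject who hypothesises "numbers increase by 2" and only tests
# confirming triplets gets "yes" every time, yet the hypothesis is wrong.
confirming = [(2, 4, 6), (10, 12, 14), (1, 3, 5)]
print([fits_rule(t) for t in confirming])  # [True, True, True]

# A triplet designed to *disconfirm* the "+2" hypothesis is far more
# informative: (1, 2, 3) breaks "+2" but still fits the hidden rule,
# so the "+2" hypothesis is refuted in a single test.
print(fits_rule((1, 2, 3)))  # True
```

The three confirmations are consistent with infinitely many rules; only the disconfirming test tells the subject anything new.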

In his 2011 Ted Talk, Eli Pariser talks about what he calls the “filter bubble”, the internet phenomenon where users are shown only what is most relevant to them. This is generally done using a recommender system method called collaborative filtering where users are recommended items based on what other people similar to them have interacted with (I’ll use interacted as a generic term for clicked, watched, bought, etc.).

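
As a rough sketch of the idea (the tiny interaction matrix and the `recommend` function below are illustrative, not any production recommender), user-based collaborative filtering scores unseen items by how strongly similar users interacted with them:

```python
from math import sqrt

# Toy interaction matrix: rows are users, columns are items
# (1 = interacted, 0 = not). All values are illustrative.
interactions = [
    [1, 1, 0, 0],  # user 0
    [1, 1, 1, 0],  # user 1, similar tastes to user 0
    [0, 0, 1, 1],  # user 2
]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def recommend(user, interactions):
    """Score unseen items by the interactions of users similar to `user`."""
    me = interactions[user]
    scores = [0.0] * len(me)
    for other, row in enumerate(interactions):
        if other == user:
            continue
        sim = cosine(me, row)  # how alike the two users' histories are
        for item, val in enumerate(row):
            scores[item] += sim * val
    unseen = [i for i, v in enumerate(me) if v == 0]
    return max(unseen, key=lambda i: scores[i])

print(recommend(0, interactions))  # 2
```

Because user 1 has similar tastes to user 0, item 2 gets pushed toward user 0, while item 3 (liked only by the dissimilar user 2) does not; this is exactly the mechanism that builds the filter bubble.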

The result of this is that you are shown more of what you have already interacted with. If you generally read conservative-leaning news articles, you’ll be shown more conservative-leaning news articles; if you watch action movies, you’ll be recommended more action movies.

However, Pariser points out that this isolates people from a variety of information and opinions by trapping them in their filter bubble, without them even knowing. This reinforces confirmation bias: not only is the user searching only for information that confirms their beliefs, it is all they have available to them.

There are two main issues with this. Firstly, there are ethical concerns with unknowingly providing users with biased content. It becomes harder for people to form well-rounded opinions, which are the result of balanced information sources. In Pariser’s words,

“The danger of these filters is that you think you are getting a representative view of the world and you are really, really not, and you don’t know it.”

The second issue is the holistic effectiveness of recommender systems. I like whisky, so when I look at any of my social media streams, it is full of online whisky sellers. Will I then go on to buy whisky? Yes.

So why is this such a bad thing?

Well, because I like whisky, and even without the adverts, I will search around the internet for interesting bottles and likely buy some anyway.

Both of these concerns, ethical and effectiveness, can be addressed by introducing an element of variation to recommended items. Perhaps a Republican response to an article, or a bottle of gin that whisky lovers tend to like.

How can this be done? A simple method is to add penalty terms for similarity into the recommendation algorithm, demoting items too similar to those already shown. This addresses the ethical concern quite well.
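
One way to sketch such a penalty is a maximal-marginal-relevance-style greedy re-ranking (the relevance scores and similarity matrix below are illustrative):

```python
def rerank_with_diversity(relevance, item_sims, lam=0.5, k=3):
    """Greedily pick items, trading relevance against similarity to
    items already selected (an MMR-style diversity penalty)."""
    selected = []
    candidates = list(range(len(relevance)))
    while candidates and len(selected) < k:
        def score(i):
            penalty = max((item_sims[i][j] for j in selected), default=0.0)
            return relevance[i] - lam * penalty
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected

# Illustrative numbers: items 0 and 1 are near-duplicates (similarity 0.95);
# item 2 is less relevant on its own but different.
relevance = [0.9, 0.85, 0.6]
item_sims = [[1.00, 0.95, 0.10],
             [0.95, 1.00, 0.10],
             [0.10, 0.10, 1.00]]
print(rerank_with_diversity(relevance, item_sims, lam=0.5))  # [0, 2, 1]
print(rerank_with_diversity(relevance, item_sims, lam=0.0))  # [0, 1, 2]
```

With the penalty on, the near-duplicate item 1 drops below the dissimilar item 2; with `lam=0` the ranking collapses back to pure relevance, i.e. the original filter bubble.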

However, to improve the model’s effectiveness, a change of perspective when training may help. Instead of measuring model performance by how many recommended items are interacted with, try measuring how many more items are purchased than would have been purchased without the recommendations.
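
In other words, measure uplift rather than raw interaction counts. A minimal sketch, assuming a simple holdout (A/B) split and illustrative numbers:

```python
def uplift(treated_purchases, treated_users, control_purchases, control_users):
    """Incremental conversion rate attributable to recommendations,
    estimated from a holdout group that saw no recommendations."""
    treated_rate = treated_purchases / treated_users
    control_rate = control_purchases / control_users
    return treated_rate - control_rate

# 5% of users shown recommendations bought, vs 4% of the holdout group:
# the recommendations added one percentage point, not five.
print(round(uplift(500, 10_000, 400, 10_000), 4))  # 0.01
```

On this metric, recommending whisky to a whisky lover who would have bought it anyway scores close to zero, which is exactly the point.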

Without an understanding of confirmation bias, data scientists would be unlikely to realize that the filter bubble phenomenon is occurring, let alone know how to try and mitigate it.

Availability Bias

Availability bias occurs when people rely on the information that is most readily available to them, generally more recent information.

In an experiment, Tversky and Kahneman showed participants either a list of 19 famous men and 20 less famous women or 19 famous women and 20 less famous men. The participants were generally able to recall more of the famous gender than less famous gender and estimated that the list of the famous gender was longer than the less famous gender.

Kahneman and Tversky argued that this was caused by availability bias. Although it is a poor heuristic for judging probability, participants used the names most readily available to them as an estimate of the total number. As they were likely to recall more of the famous gender, they estimated that its list was the longer one.

When training machine learning models, availability bias can often cause data bias. If only the most readily available data is used to train the model, it may contain inherent bias.

A well-known example of this is gender bias in machine translation. This can occur when translating from gendered languages to gender-neutral languages. For example, “he” and “she” in English both translate to the ungendered pronoun “o” in Turkish.

People started noticing that this created gender biases in Google Translate, such as translating “o bir doktor” to “he is a doctor” and “o bir hemşire” to “she is a nurse.” This was because of an inherent bias in the training data, as historically more men were doctors and more women were nurses.

This is a consequence of availability bias, where data scientists took the data available to them without considering whether it would create the most effective model.

In order to mitigate this bias, data scientists need to change their thinking from “How can I make my model using the data I have?” to:

“What data do I need to create my model?”

In the example above, Google’s solution was to create a new dataset containing queries labelled as either male, female or gender-neutral, which they used to train their model. By thinking outside of what was immediately available to them, they were able to create a much more effective machine learning model.

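
A simple version of that kind of dataset intervention can be sketched as follows (the examples and label names are hypothetical, and Google’s actual pipeline is far more involved): downsample each class so the model cannot learn the historical skew as a shortcut.

```python
from collections import Counter
import random

# Hypothetical training examples labelled male / female / neutral,
# with a strong historical skew toward one class.
examples = (
    [("sentence", "male")] * 700
    + [("sentence", "female")] * 200
    + [("sentence", "neutral")] * 100
)

def balance_by_label(examples, seed=0):
    """Downsample each class to the size of the rarest one."""
    rng = random.Random(seed)
    by_label = {}
    for text, label in examples:
        by_label.setdefault(label, []).append((text, label))
    n = min(len(v) for v in by_label.values())
    balanced = [ex for v in by_label.values() for ex in rng.sample(v, n)]
    rng.shuffle(balanced)
    return balanced

balanced = balance_by_label(examples)
counts = Counter(label for _, label in balanced)
print(dict(sorted(counts.items())))  # {'female': 100, 'male': 100, 'neutral': 100}
```

Downsampling trades data volume for balance; collecting new, deliberately labelled data (as in the example above) is the stronger fix when it is feasible.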

Survivorship Bias

In 1943, during World War II, the US military studied the damage to planes and decided to reinforce the areas that were most commonly damaged, in order to reduce bomber losses.

However, statistician Abraham Wald realized that the areas that were most often hit were the least vulnerable, as those planes had still been able to return to base; instead, the areas with the least evidence of damage should be reinforced, as the lack of damage indicated that the planes hit in those areas had gone on to crash.

Survivorship bias is a type of availability bias, but instead of focusing on the most readily available information, humans focus on the most visible information, typically because it has passed through some selection process.

The above story teaches a very important lesson about the limitations of data science. Sometimes all the available data is still not enough to create a good model. What is not available might be just as important.

Unfortunately, this often means that machine learning models can go into production before the data scientists who developed them realize that they aren’t working.

Understanding this limitation is extremely important in avoiding wasted time and money. A simple solution is to include domain experts in the machine learning development and data collection processes. These experts will be able to spot domain-specific issues that cannot be seen from the data without additional context.

Anchoring

Anchoring occurs when a person relies too heavily on a piece of information they have already received. All future decisions and judgments are then made using this piece of information as an “anchor”, even though it may be irrelevant.

In a 1974 article published in Science, Kahneman and Tversky describe an experiment in which they spun a rigged wheel of fortune in front of participants. The wheel would land on either 10 or 65. The participants were then asked to estimate the percentage of African countries in the United Nations.

The group that saw the wheel land on 10 estimated, on average, 25 percent. On the other hand, those who saw the wheel land on 65 guessed, on average, 45 percent. This is despite the fact that the participants thought the wheel was completely random.

Once humans have been provided with an anchor, they use it as the starting point for any decision. In the above experiment, those who saw 10 on the wheel subconsciously used 10 as the starting point for their estimate. They would then increase the number until they were comfortable with it.

Since most people are not one-hundred percent certain about any decision they make, there is a window of uncertainty. This means that if you approach an estimate from two different directions, you can end up with wildly different values, on either side of that window.

Anchoring can be a particularly important consideration when creating training datasets for machine learning models. These datasets are often created by tasking humans with manually labelling the data using their own judgment. Since, in many cases, the aim of machine learning is to reproduce human decision making, this is often the most accurate, if not the only, way to create a dataset of the “ground truth.”

This can be quite straightforward if you are labelling whether images are cats or dogs. But imagine you have asked a group of real estate experts to estimate the price of houses. If the first house you show them is a multi-million dollar mansion, the subsequent estimates are likely to be much higher than if you were to start with a run-down bungalow.

The result of this could be a machine learning model that consistently over- or underestimates the price of houses, not because the model performs badly, but because the data is biased. In fact, it is likely that a data scientist wouldn’t spot the poor performance, as the validation dataset would have been labeled in the same way and so would contain the same bias.

There are several ways to mitigate anchoring. The first is to deliberately show participants specific initial data points. This could be a series of houses that have been judged to be mid-range. Alternatively, it could be a set of examples: three houses with low, medium and high price tags, shown along with those price tags.

In both of these cases, anchoring is not being mitigated but is being deliberately set to avoid bias. Alternatively, to mitigate anchoring, each data point can be labelled by multiple participants, with the average taken as the final label. Each participant receives a random selection of data points, in a random order, so that averaging counteracts any individual biases.
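
The averaging approach can be sketched as follows (the biased “labellers” here are stand-in functions for human annotators; any real labelling tool would differ):

```python
import random
import statistics

def collect_labels(items, labellers, seed=0):
    """Each labeller sees the items in their own random order; the final
    label is the mean across labellers, so individual anchors average out."""
    rng = random.Random(seed)
    estimates = {i: [] for i in range(len(items))}
    for label in labellers:
        order = list(range(len(items)))
        rng.shuffle(order)  # an independent order per labeller
        for idx in order:
            estimates[idx].append(label(items[idx]))
    return [statistics.mean(estimates[i]) for i in range(len(items))]

# Two biased labellers: one anchored high, one anchored low.
high = lambda price: price * 1.10
low = lambda price: price * 0.90
true_prices = [200_000, 350_000, 500_000]
print(collect_labels(true_prices, [high, low]))
```

With opposing biases of equal size, the averages land back on the true prices; with real annotators the cancellation is only partial, but the averaged labels are still less anchored than any single labeller’s.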

Final thoughts

Cognitive bias is an unavoidable phenomenon in human decision making. However, research over the past half-century has shown us that these irrational decisions are predictable, and this predictability can be used to mitigate them.

Although machine learning models cannot, in and of themselves, have cognitive biases, they can acquire biases as a result of human cognitive bias, since they are an interface for human decision making.

Whether ensuring there is no bias contained within the data going in, or accounting for the bias of the humans that use the data going out, data scientists need to consider human decision making.

Without these considerations, we have seen how models can be ineffective, or even wrong. And we may not even know that it’s happening.

Translated from: https://medium.com/swlh/why-all-data-scientists-should-understand-behavioral-economics-1efbd8df2f71
