Machine learning monitoring: what it is and what we are missing

Is there life after deployment?

Congratulations! Your machine learning model is now live. Many models never make it that far. Some claim that as many as 87% are never deployed. Given how hard it is to get from a concept to a working application, the celebration is well deserved.

It might feel like a final step.

Indeed, even the design of machine learning courses and the landscape of machine learning tools add to this perception. They extensively address data preparation, iterative model building, and (most recently) the deployment phase. Still, both in tutorials and practice, what happens after the model goes into production is often left up to chance.

Image by author.
The simple reason for this neglect is a lack of maturity.

Aside from a few technical giants that live and breathe machine learning, most industries are only getting started. There is limited experience with real-life machine learning applications. Companies are overwhelmed with sorting many things out for the first time and rushing to deploy. Data scientists do everything from data cleaning to the A/B test setup. Model operations, maintenance, and support are often only an afterthought.

One of the critical, but often overlooked components of this machine learning afterlife is monitoring.

Why monitoring matters

An ounce of prevention is worth a pound of cure — Benjamin Franklin

With the learning techniques we use these days, a model is never final. In training, it studies the past examples. Once released into the wild, it works with new data: this can be user clickstream, product sales, or credit applications. With time, this data deviates from what the model has seen in training. Sooner or later, even the most accurate and carefully tested solution starts to degrade.

The recent pandemic illustrated this all too vividly.

Some cases even made the headlines:

  • The accuracy of Instacart’s model for predicting in-store item availability dropped from 93% to 61% due to a drastic shift in shopping habits.

  • Bankers question whether credit models trained on good times can adapt to the stress scenarios.

  • Trading algorithms misfired in response to market volatility. Some funds saw a 21% fall.

  • Image classification models had to learn the new normal: a family at home in front of laptops can now mean “work,” not “leisure.”

  • Even weather forecasts are less accurate since valuable data disappeared with the reduction of commercial flights.

A new concept of “office work” your image classification model might need to learn in 2020. (Image by Ketut Subiyanto on Pexels)
On top of this, all sorts of issues occur with live data.

There are input errors and database outages. Data pipelines break. User demographics change. If a model receives wrong or unusual input, it will make an unreliable prediction. Or many, many of those.

Model failures and untreated decay cause damage.

Sometimes this is just a minor inconvenience, like a silly product recommendation or wrongly labeled photo. The effects go much further in high-stake domains, such as hiring, grading, or credit decisions. Even in otherwise “low-risk” areas like marketing or supply chain, underperforming models can severely hit the bottom line when they operate at scale. Companies waste money in the wrong advertising channel, display incorrect prices, understock items, or harm the user experience.

Here comes monitoring.

We don’t just deploy our models once. We already know that they will break and degrade. To operate them successfully, we need a real-time view of their performance. Do they work as expected? What is causing the change? Is it time to intervene?

This sort of visibility is not a nice-to-have, but a critical part of the loop. Monitoring bakes into the model development lifecycle, connecting production with modeling. If we detect a quality drop, we can trigger retraining or step back into the research phase to issue a model remake.

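A minimal sketch of such a retraining trigger, assuming we can periodically measure live accuracy against the training baseline (the function name and thresholds here are illustrative, not a production recipe):

```python
def check_model_health(live_accuracy: float,
                       baseline_accuracy: float,
                       tolerance: float = 0.05) -> str:
    """Compare live accuracy with the training baseline and decide
    whether to keep serving, trigger retraining, or step back
    into the research phase for a model remake."""
    drop = baseline_accuracy - live_accuracy
    if drop <= tolerance:
        return "ok"            # within the accepted band
    if drop <= 2 * tolerance:
        return "retrain"       # quality slipped: refresh on recent data
    return "investigate"       # severe drop: the model may need a remake

# Example: a model tested at 93% accuracy now measures 61% live
print(check_model_health(live_accuracy=0.61, baseline_accuracy=0.93))
```

In practice, computing `live_accuracy` is the hard part, since ground truth labels often arrive with a delay.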
Image by author.
Let us propose a formal definition:

Machine learning monitoring is a practice of tracking and analyzing production model performance to ensure acceptable quality as defined by the use case. It provides early warnings on performance issues and helps diagnose their root cause to debug and resolve.

How machine learning monitoring is different

One might think: we have been deploying software for ages, and monitoring is nothing new. Just do the same with your machine learning stuff. Why all the fuss?

There is some truth to it. A deployed model is a software service, and we need to track the usual health metrics such as latency, memory utilization, and uptime. But in addition to that, a machine learning system has its unique issues to look after.

Image by author.
First of all, data adds an extra layer of complexity.

It is not just the code we should worry about, but also data quality and its dependencies. More moving pieces — more potential failure modes! Often, these data sources reside completely out of our control. And even if the pipelines are perfectly maintained, the environmental change creeps in and leads to a performance drop.

Is the world changing too fast? In machine learning monitoring, this abstract question becomes applied. We watch out for data shifts and casually quantify the degree of change. Quite a different task from, say, checking a server load.

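A minimal, numpy-only sketch of quantifying such a shift on a single feature, using the two-sample Kolmogorov-Smirnov statistic (the feature, the synthetic data, and the 0.1 alert threshold are all illustrative assumptions):

```python
import numpy as np

def ks_statistic(reference, current):
    """Maximum distance between the two empirical CDFs:
    0 means identical samples, values near 1 mean the
    production data no longer resembles the training data."""
    grid = np.sort(np.concatenate([reference, current]))
    cdf_ref = np.searchsorted(np.sort(reference), grid, side="right") / len(reference)
    cdf_cur = np.searchsorted(np.sort(current), grid, side="right") / len(current)
    return float(np.max(np.abs(cdf_ref - cdf_cur)))

rng = np.random.default_rng(0)
training_ages = rng.normal(loc=35, scale=8, size=5000)    # what the model saw
production_ages = rng.normal(loc=42, scale=8, size=5000)  # shifted demographics

score = ks_statistic(training_ages, production_ages)
print(f"KS statistic: {score:.2f}, drift: {score > 0.1}")
```

The same idea extends to every feature the model consumes; in practice you would also pick the test and threshold per feature type.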
To make things worse, models often fail silently.

There are no “bad gateways” or “404”s. Despite the input data being odd, the system will likely return the response. The individual prediction might seemingly make sense — while being harmful, biased, or wrong.

Imagine, we rely on machine learning to predict customer churn, and the model fell short. It might take weeks to learn the facts (such as whether an at-risk client eventually left) or notice the impact on the business KPI (such as a drop in quarterly renewals). Only then, we would suspect the system needs a health check! You’d hardly miss a software outage for that long. In the land of unmonitored models, this invisible downtime is an alarming norm.

To save the day, you have to react early. This means assessing just the data that went in and how the model responded: a peculiar type of half-blind monitoring.

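A sketch of what this half-blind monitoring can look like in code: with no ground truth yet, we track the model's own responses, here the share of positive predictions against the rate observed during validation (the class, the window size, and the churn numbers are hypothetical):

```python
from collections import deque

class PredictionMonitor:
    """Watches the stream of model responses and alerts when the
    live share of positive predictions drifts away from the
    reference rate seen during validation -- no labels required."""

    def __init__(self, reference_rate: float,
                 window: int = 1000, tolerance: float = 0.10):
        self.reference_rate = reference_rate
        self.predictions = deque(maxlen=window)  # rolling window of outputs
        self.tolerance = tolerance

    def observe(self, prediction: int) -> None:
        self.predictions.append(prediction)

    def alert(self) -> bool:
        if not self.predictions:
            return False
        live_rate = sum(self.predictions) / len(self.predictions)
        return abs(live_rate - self.reference_rate) > self.tolerance

# In validation, ~20% of customers were flagged as churn risks;
# suddenly the live model flags 60% -- worth a look, labels or not.
monitor = PredictionMonitor(reference_rate=0.20)
for prediction in [1] * 600 + [0] * 400:
    monitor.observe(prediction)
print("alert:", monitor.alert())
```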
Image by author.
The distinction between “good” and “bad” is not clear-cut.

One accidental outlier does not mean the model went rogue and needs an urgent update. At the same time, stable accuracy can also be misleading. Hiding behind an aggregate number, a model can quietly fail on some critical data region.

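One way to look behind the aggregate number is to slice performance by segment. A small sketch (the segment names and records are made up for illustration):

```python
from collections import defaultdict

def accuracy_by_segment(records):
    """Per-segment accuracy from (segment, y_true, y_pred) triples:
    a strong overall score can hide a failure in one data region."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for segment, y_true, y_pred in records:
        totals[segment] += 1
        hits[segment] += int(y_true == y_pred)
    return {segment: hits[segment] / totals[segment] for segment in totals}

records = (
    [("new_users", 1, 1)] * 40 + [("new_users", 1, 0)] * 60     # 40% correct
    + [("returning", 1, 1)] * 95 + [("returning", 1, 0)] * 5    # 95% correct
)
per_segment = accuracy_by_segment(records)
overall = sum(1 for _, t, p in records if t == p) / len(records)
print(f"overall: {overall:.2f}, per segment: {per_segment}")
```

Here the overall accuracy looks acceptable while the model quietly fails on new users, exactly the pattern an aggregate metric hides.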
Image by author.
Metrics are useless without context.

Acceptable performance, model risks, and costs of errors vary across use cases. In lending models, we care about fair outcomes. In fraud detection, we barely tolerate false negatives. With stock replenishment, ordering more might be better than less. In marketing models, we would want to keep tabs on the premium segment performance.

All these nuances inform our monitoring needs, specific metrics to keep an eye on, and the way we’ll interpret them.

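One way to encode such use-case context is to weight errors by their business cost instead of counting them equally. A toy sketch (the costs and labels are invented for illustration):

```python
def error_cost(y_true, y_pred, fn_cost: float, fp_cost: float) -> float:
    """Cost-weighted error: the same confusion matrix can be cheap
    or expensive depending on what each mistake costs the business."""
    false_negatives = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    false_positives = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return false_negatives * fn_cost + false_positives * fp_cost

y_true = [1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 0]  # one missed positive, one false alarm

# Fraud detection: a missed fraud (FN) is far costlier than a false alarm
fraud_cost = error_cost(y_true, y_pred, fn_cost=100, fp_cost=1)
# Marketing: annoying a customer (FP) may cost more than a missed send (FN)
marketing_cost = error_cost(y_true, y_pred, fn_cost=1, fp_cost=5)
print(fraud_cost, marketing_cost)
```

The same predictions score very differently under the two cost schemes, which is why a single shared metric rarely serves every use case.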
With this, machine learning monitoring falls somewhere in between traditional software and product analytics. We still look at “technical” performance metrics — accuracy, mean absolute error, and so on. But what we primarily aim to check is the quality of the decision-making that machine learning enables: whether it is satisfactory, unbiased, and serves our business goal.

In a nutshell

Looking only at software metrics is too little. Looking at the downstream product or business KPIs is too late. Machine learning monitoring is a distinct domain, and it requires appropriate practices, strategies, and tools.

Who should care about machine learning monitoring?

The short answer: everyone who cares about the model’s impact on business.

Of course, data scientists are on the front line. But once the model leaves the lab, it becomes a part of the company’s products or processes. Now, this is not just some technical artifact but an actual service with its users and stakeholders.

The model can present outputs to external customers, such as a recommendation system on an e-commerce site. Or it can be a purely internal tool, such as sales forecasting models for your demand planners. In any case, there is a business owner — a product manager or a line-of-business team — that relies on it to deliver results. And a handful of others concerned, with roles spanning from data engineers to support.

Both data and business teams need to track and interpret model behavior.

Image by author.
For the data team, this is about efficiency and impact. You want your models to make the right call, and business to adopt them. You also want the maintenance to be hassle-free. With adequate monitoring, you quickly detect, resolve, prevent incidents, and refresh the model as needed. Observability tools help keep the house in order and save you time to build new things.

For business and domain experts, it ultimately boils down to trust. When you act on model predictions, you need a reason to believe they are right. You might want to explore specific outcomes or get a general sense of the model’s weak spots. You also need clarity on the ongoing model value and peace of mind that risks are under control.

If you operate in healthcare, insurance, or finance, this supervision gets formal. Compliance will scrutinize the models for bias and vulnerabilities. And since models are dynamic, it is not a one-and-done sort of test. You have to continuously run checks on the live data to see how each model keeps up.

We need a complete view of the production model.

Proper monitoring can provide this and serve each party the right metrics and visualizations.

Image by author.
Let’s face it. Enterprise adoption can be a struggle. And it often only starts after model deployment. There are reasons for that.

In an ideal world, you can translate all your business objectives into an optimization problem and reach the model accuracy that makes human intervention obsolete.

In practice, you often get a hybrid system and a bunch of other criteria to deal with. These include stability, ethics, fairness, explainability, user experience, and performance on edge cases. You can’t simply blend them all into your error minimization goal. They need ongoing oversight.

A useful model is a model used.

Fantastic sandbox accuracy makes no difference if the production system never makes it.

Beyond “quick win” pilot projects, one has to make the value real. For that, you need transparency, stakeholder engagement, and the right collaboration tools.

The visibility pays back.

This shared context improves adoption. It also helps when things go off track.

Suppose a model returns a “weird” response. It is the domain experts who help you decide whether you can dismiss it. Or, your model fails on a specific population. Together, you can brainstorm new features to address this.

Want to dig into the emerging data drift? Adjust the classifier decision threshold? Figure out how to compensate for model flaws by tweaking product features?

All this requires collaboration.

Such engagement is only possible when the whole team has access to relevant insights. A model should not be an obscure black-box system. Instead, you treat it as a machine learning product that one can audit and supervise in action.

When done right, model monitoring is more than just technical bug-tracking. It serves the needs of many teams and helps them collaborate on model support and risk mitigation.

The monitoring gap

In reality, there is a painful mismatch. Research shows that companies monitor only one-third of their models. As for the rest? We seem to be in the dark.

This is how the story often unfolds.

At first, a data scientist babysits the model. Immediately after deployment, you often need to collect feedback and iterate on details, which keeps you occupied. Then the model is deemed fully operational, and its creator moves on to a new project. The monitoring duty is left hanging in the air.

Some teams would routinely revisit the models for a basic health check and miss anything that happens in between. Others only discover issues from their users and then rush to put out a fire.

The solutions are custom and partial.

For the most important models, you might find a dedicated home-grown dashboard. Often, these become a Frankenstein of custom checks, one added after each consecutive failure the team encounters. To paint a full picture, each model monitor also has its own custom interface, while business KPIs live in separate, siloed reports.

If someone on a business team asks for a deeper model insight, this would mean custom scripts and time-consuming analytical work. Or often, the request is simply written off.

It is hard to imagine critical software that relies on spot-checking and manual review. But these disjointed, piecemeal solutions are surprisingly common in the modern data science world.

Why is it so?

One reason is the lack of clear responsibility for the deployed models. In a traditional enterprise setting, you have a DevOps team that takes care of any new software. With machine learning, this is a grey zone.

Sure, IT can watch over service health. But when the input data changes — whose turf is it? Some aspects concern data engineering, while others are closer to operations or product teams.

Everybody’s business is nobody’s business.

The data science team usually takes up the monitoring burden. But they juggle way too many things already and rarely have incentives to put maintenance first.

In the end, we often drop the ball.

A day in the life of an enterprise data scientist. Image by author.
Keep an eye on AI

We should urgently address this gap with production-focused tools and practices.

As applications grow in number, holistic model monitoring becomes critical. You can hand-hold one model, but not a dozen.

It is also vital to keep the team accountable. We deploy machine learning to deliver business value — we need a way to show it clearly in production, and to raise awareness of the costs of downtime and the importance of support and improvement work.

Of course, the data science process is chaotic all over.

We log experiments poorly. We mismanage deployments. Machine learning operations (aka MLOps) is a rising practice that tackles this mess step by step. And monitoring comes, sort of, at the very end. Yet we’d argue that we should solve it early. Ideally, as soon as your first model ships.

When a senior leader asks you how the AI project is doing, you don’t want to take a day to respond. Nor do you want to be the last to know of a model failure.

Seamless production, visible gains, and happy users are key to building the reputation machine learning needs to scale. Unless you are in pure research, that is where we aim.

Summing up

Monitoring might be boring, but it is essential to success. Do it well, and do it sooner.

This blog was originally published at https://evidentlyai.com.

Machine learning monitoring is exactly what we look to solve at Evidently AI. Want to get a sneak peek of our platform? Reach out to hello@evidentlyai.com, or sign up to get updates on the launch.

Translated from: https://towardsdatascience.com/machine-learning-monitoring-what-it-is-and-what-we-are-missing-e644268023ba
