Can AI Make You a Better Athlete? Using Machine Learning to Analyze Tennis Serves and Penalty Kicks

Ah, sportsball. Can you ever forget the first time you grab that pass, fly down the court, and sink that puck right through the net as your fans yell adoringly from the bleachers: TOUCHDOWN!


No? Not ringing a bell? Me neither. That’s what you get when you spend your high school years learning calculus and icing pi on cookie cakes instead of doing sports.


How many friends do you think this made me?

That’s time you never get back. Unless, of course, you figure out a way to use that high school math to become a better athlete.


Which is what we’ll be looking at today! In this post, I’ll show you how to use machine learning to analyze your performance in your sport of choice (as an example, I’ll be using my tennis serve, but you can easily adapt the technique to other sports). By the way, this project was inspired by my recent interview with Zack Akil, who used the same techniques to analyze penalty kicks in soccer.


Making With Machine Learning episode about tracking penalty kicks with AI.

Machine learning already plays a role in sports: companies use it to identify players’ unique talents, detect injuries earlier, and broker optimal player trades. Plus, almost every professional sport (NFL, NHL, MLB, NBA, soccer, golf, cricket, to name a few) uses ML technology for tracking. The NBA, for example, has deployed a sophisticated vision-based system on all of its courts, tracking players’ motions, reading numbers off their jerseys, analyzing how fast they pass the ball, and determining how accurately they shoot under pressure.


But as a beginner, I’d love to use that same technology simply to tell me what I’m doing wrong and where I’m making mistakes. Ideally, I’d build an app that I could set up on a tripod (for example) while I’m on the tennis court, one that analyzes video of me serving and gives me helpful tips (e.g. “straighten your arm,” “bend your knees”). In this post, I’ll show you the core techniques that would make an app like that possible.


Want to jump straight to the code? Check out the code on GitHub.


Using Machine Learning to Analyze My Tennis Serve

A few weeks ago, I went to a tennis court, set up a tripod, and captured some footage of me serving a tennis ball. I sent it to my friend JT, a tennis coach, and asked him what I was doing wrong. He sent me back a bunch of side-by-side photos of me compared with professional tennis players, and pointed out all the places we differed — the whole trajectory of my arm and the angle of my elbow was way off.


What JT had done was useful — he analyzed key parts of my serve that differed from those of professional athletes. Wouldn’t it be neat if a machine learning model could do the same thing? Compare your performance with professionals and let you know what you’re doing differently?


With JT’s feedback in hand, I decided to focus on three facets of serving:


  1. Were my knees bent as I served?
  2. Was my arm straight when I hit the ball?
  3. How fast did the ball actually travel after I hit it? (This one was just for my personal interest.)

Analyzing Posture with Pose Detection

To compute the angle of my knees and arms, I decided to use pose detection, a machine learning technique that analyzes photos or videos of humans and tries to locate their body parts. There are lots of tools you can use for pose detection (like TensorFlow.js), but for this project, I wanted to try out the new Person Detection (beta!) feature of the Google Cloud Video Intelligence API. (You might recognize this API from my AI-Powered Video Archive, where I used it to analyze objects, text, and speech in my family videos.) The Person Detection feature recognizes a whole bunch of body parts, facial features, and clothing.


To start, I clipped the video of my tennis serves down to just the sections where I was serving. Since I only caught 17 serves on camera, this took me about a minute. Next, I uploaded the video to Google Cloud Storage and ran it through the Video Intelligence API. All of that code is conveniently documented in a Colab notebook, which you can run on your own video (you’ll just need a Google Cloud account). The notebook even shows you how to set up authentication and create buckets and all that jazz. The interesting bit, analyzing pose, is this one:


from google.oauth2 import service_account
# Person Detection was in beta at the time, so this assumes the
# v1p3beta1 surface of the client library:
from google.cloud import videointelligence_v1p3beta1 as videointelligence

def detect_person(input_uri, output_uri):
    """Detects people in a video."""
    client = videointelligence.VideoIntelligenceServiceClient(
        credentials=service_account.Credentials.from_service_account_file('./key.json'))

    # Configure the request
    config = videointelligence.types.PersonDetectionConfig(
        include_bounding_boxes=True,
        include_attributes=True,
        include_pose_landmarks=True,
    )
    context = videointelligence.types.VideoContext(person_detection_config=config)

    # Start the asynchronous request
    operation = client.annotate_video(
        input_uri=input_uri,
        output_uri=output_uri,
        features=[videointelligence.enums.Feature.PERSON_DETECTION],
        video_context=context,
    )
    return operation

To call the API, you pass the location in Cloud Storage where your video is stored as well as a destination in cloud storage where the Video Intelligence API can write the results.


Here, I’m calling the asynchronous version of the Video Intelligence API. It analyzes video on Google’s backend, in the cloud, even after my notebook is closed. That’s convenient for long videos, but there are also synchronous and streaming versions of this API!

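For reference, here’s a minimal sketch of what kicking off the job and waiting on it might look like (the bucket and file names are placeholders, not from the original notebook):

operation = detect_person(
    input_uri='gs://my_neat_bucket/tennis_serves.mp4',
    output_uri='gs://my_neat_bucket/output/results.json',
)
# Block until the asynchronous job finishes (this can take a while).
result = operation.result(timeout=600)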

When the Video Intelligence API finished analyzing my video, I visualized the results using this neat tool built by @wbobeirne. It spits out slick visualization videos like this:


Using the Video Intelligence API to visualize my posture during a serve.

Pose detection makes a great pre-processing step for training machine learning models. For example, I could use the output of the API (the position of my joints over time) as input features to a second machine learning model that tries to predict (for example) whether or not I’m serving, or whether or not my serve will go over the net. But for now, I want to do something much simpler: analyze my serve with high school math!


For starters, I plotted the y position of my left and right wrists over time:


The height of my wrists (in pixels) over time
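
In case you’re curious, here’s a minimal sketch of what that plotting might look like. It assumes the API’s landmarks have already been parsed into a pandas DataFrame called df, with one row per frame and columns like 'left_wrist_y' (the same naming convention the angle helpers below rely on):

import matplotlib.pyplot as plt

# Raw image y coordinates grow downward, so flip the sign to read
# the curves as height.
plt.plot(-df['left_wrist_y'], label='left wrist')
plt.plot(-df['right_wrist_y'], label='right wrist')
plt.xlabel('frame')
plt.ylabel('wrist height (pixels, flipped)')
plt.legend()
plt.show()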

It might look messy, but that data actually shows pretty clearly the lifetime of a serve. The blue line shows the position of my left wrist, which peaks as I toss the tennis ball, a few seconds before I hit it with my racket (the hit shows up as the peak in the right wrist, the orange line).

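Since the toss and the hit show up as extrema in those curves, picking the two moments out programmatically takes only a couple of lines. A rough sketch, using the same assumed df as above (in raw image coordinates, the highest point is the minimum y):

toss_frame = df['left_wrist_y'].idxmin()      # peak of the ball toss (left hand)
contact_frame = df['right_wrist_y'].idxmin()  # peak of the swing (right hand)
print(f'Toss peaks around frame {toss_frame}, contact around frame {contact_frame}')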

Using this data, I can tell pretty accurately at what points in time I’m tossing the ball and hitting it. I’d like to align that with the angle my elbow makes as I hit the ball. To do that, I’ll have to convert the output of the Video Intelligence API (raw pixel locations) into angles. How do you do that? The Law of Cosines, duh! (Just kidding, I definitely forgot this and had to look it up. Here’s a great explanation and some Python code.)


The Law of Cosines is the key to converting points in space to angles. In code, that looks something like:


import math

class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y

def getAngle(a, b, c):
    # Angle at vertex b, formed by the segments b->a and b->c.
    # (Note: the result can be negative; normalize if you need [0, 360).)
    ang = math.degrees(math.atan2(c.y - b.y, c.x - b.x) -
                       math.atan2(a.y - b.y, a.x - b.x))
    return ang

def computeElbowAngle(row, which='right'):
    wrist = Point(row[f'{which}_wrist_x'], row[f'{which}_wrist_y'])
    elbow = Point(row[f'{which}_elbow_x'], row[f'{which}_elbow_y'])
    shoulder = Point(row[f'{which}_shoulder_x'], row[f'{which}_shoulder_y'])
    return getAngle(wrist, elbow, shoulder)

def computeShoulderAngle(row, which='right'):
    elbow = Point(row[f'{which}_elbow_x'], row[f'{which}_elbow_y'])
    shoulder = Point(row[f'{which}_shoulder_x'], row[f'{which}_shoulder_y'])
    hip = Point(row[f'{which}_hip_x'], row[f'{which}_hip_y'])
    return getAngle(hip, shoulder, elbow)

def computeKneeAngle(row, which='right'):
    hip = Point(row[f'{which}_hip_x'], row[f'{which}_hip_y'])
    knee = Point(row[f'{which}_knee_x'], row[f'{which}_knee_y'])
    ankle = Point(row[f'{which}_ankle_x'], row[f'{which}_ankle_y'])
    return getAngle(ankle, knee, hip)

Check out the notebook to see all the details. Using these formulae, I plotted the angle of my elbow over time:


The angle of my right elbow over time
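
As a sketch of how those helpers plug in, again assuming the df of per-frame landmark columns from above (DataFrame.apply forwards the extra keyword argument to the function):

import matplotlib.pyplot as plt

df['right_elbow_angle'] = df.apply(computeElbowAngle, axis=1, which='right')
plt.plot(df['right_elbow_angle'])
plt.xlabel('frame')
plt.ylabel('right elbow angle (degrees)')
plt.show()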

By lining up the height of my wrist with the angle of my elbow, I was able to determine that my elbow angle at the moment I hit the ball was around 120 degrees (not straight!). If JT hadn’t told me what to look for, it would have been nice for an app to catch that my arm angle differed from the professionals’ and let me know.


I used the same formula to calculate the angles of my knees and shoulders. Again, check out more details in the notebook.


Computing the Speed of My Serve

Pose detection let me compute the angles of my body, but I also wanted to compute the speed of the ball after I hit it with my racket. To do that, I had to be able to track the tiny, speedy little tennis ball over time.


As you can see here, the tennis ball was sort of hard to identify because it was blurry and far away.

I handled this the way Zack did in his Football Pier project: I trained a custom AutoML Vision model.


If you’re not familiar with AutoML Vision, it’s a no-code way to build computer vision models using deep neural networks. The best part is, you don’t have to know anything about ML to use it! The worst part is the cost. It’s pricey (more on that in a minute).


AutoML Vision lets you upload your own labeled data (i.e. with labeled tennis balls) and trains a model for you.


Training an Object Detection Model with AutoML Vision

To get started, I took a thirty-second clip of me serving and split it into individual pictures I could use as training data for a vision model:


ffmpeg -i filename.mp4 -vf fps=10 -ss 00:00:01 -t 00:00:30 tmp/snapshots/%03d.jpg

You can run that command from within the notebook I provided, or from the command line if you have ffmpeg installed. It takes an mp4 and creates a bunch of snapshots (here at fps=10, i.e. 10 frames per second) as jpgs. The -ss flag controls how far into the video the snapshots should start (i.e. start "seeking" at 1 second) and the -t flag controls how many seconds of video should be included (30 in this case).


Once you’ve got all your snapshots created, you can upload them to Google Cloud Storage with the commands:


gsutil mb gs://my_neat_bucket  # create a new bucket
gsutil cp tmp/snapshots/* gs://my_neat_bucket/snapshots

Next, navigate to the Google Cloud console and select Vision from the left-hand menu:


Create a new AutoML Vision Model and import your photos.


Quick recap: what’s a machine learning classifier? It’s a type of model that learns how to label things from examples. So to train our own AutoML Vision model, we’ll need to provide some labeled training data for the model to learn from.


Once your data has been uploaded, you should see it in the AutoML Vision “IMAGES” tab:


Here, you can start applying labels. Click into an image. In the editing view (below), you’ll be able to click and drag a little bounding box:


Gif of the AutoML Vision Data Labeling Interface

Congratulations, you have just begun a long and rewarding career as a data labeler. Next stop, MIT!


For my model, I hand-labeled about 300 images, which took me ~30 minutes. Once you’re done labeling data, it’s just one click to actually train a model with AutoML: just click the “Train New Model” button and wait.


When your model is done training, you’ll be able to evaluate its quality in the “Evaluate” tab.


As you can see, my model was pretty darn accurate, with about 96% precision and recall. Hot dog!


This was more than enough to be able to track the position of the ball in my pictures, and therefore calculate its speed:


It’s very tiny here, but you can see that tiny bounding box tracking the tennis ball.

Once you’ve trained your model, you can use the code in the Jupyter notebook to make a cute lil’ video like the one above.


You can then use this to plot the position of the ball over time, to calculate speed (see the notebook for more details):


y position of the tennis ball over time
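
A back-of-the-envelope version of that calculation might look like the sketch below. It assumes the model’s detections have been collected into a DataFrame called ball with per-frame box-center columns 'x' and 'y' (in pixels), and that frames were sampled at the fps used in the ffmpeg step:

import numpy as np

FPS = 10  # must match the snapshot rate used when extracting frames
dx = ball['x'].diff()
dy = ball['y'].diff()
speed_px_per_sec = np.sqrt(dx ** 2 + dy ** 2) * FPS
print(f'Peak ball speed: {speed_px_per_sec.max():.0f} pixels per second')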

Unfortunately, I realized too late that I’d made a grave mistake here. What is speed? Change in distance over time, right? But because I didn’t actually know the distance between me, the player, and the camera, I couldn’t compute distance in miles or meters, only pixels! So I learned that I serve the ball at approximately 200 pixels per second. Nice.


*Since I wrote this post, some folks have told me I should have approximated the distances by using the (known) size of a tennis ball. Sounds like a good idea to me!

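In sketch form, that fix could be as simple as the snippet below, where ball_box_width_px is a hypothetical stand-in for the detected width of the ball’s bounding box, in pixels:

BALL_DIAMETER_M = 0.067  # a regulation tennis ball is about 6.7 cm across

# Hypothetical: average detected width of the ball's bounding box, in pixels.
meters_per_pixel = BALL_DIAMETER_M / ball_box_width_px
speed_m_per_sec = speed_px_per_sec * meters_per_pixel  # reuses the pixel speed above
print(f'Peak serve speed: {speed_m_per_sec.max() * 2.237:.0f} mph')  # 1 m/s ≈ 2.237 mph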

So there you have it — some techniques you can use to build your own sports machine learning trainer app!


A Note on Cost

Disclaimer: I work for Google, and I use Google Cloud for free. I try to recommend free tools here whenever possible, but I turn to GCP by instinct, and sometimes I don’t notice the cost.


Welp, when it came to AutoML Vision, that turned out not to be a great idea. Here’s what this project cost me:


The whole thing cost about $450. Ouch. But before you get entirely turned off by the price tag, let’s break things down:


I trained two AutoML models, and the cost of training for both was $91.11. Sure, that’s pricey, but the quality was pretty high and maybe for certain business use cases, it makes sense.


The real cost comes from the first line item: AutoML Image Object Detection Online Prediction. What’s that? It’s the cost Google charges for hosting your model for you in the cloud so that you can call it with a standard REST API. Weirdly, you’re continually charged for hosting this model even if you’re not making predictions against it, which really makes the cost rack up fast.


The good news is that AutoML Vision actually runs in three ways:


  1. You can configure a model to be hosted in the cloud, where you can hit it at a REST endpoint at any time (most expensive).
  2. You can use it in batch mode only (predictions run in an asynchronous fashion, not for real-time use cases), which wipes out most of that additional cost.
  3. You can train your model to be exportable, allowing you to download it as a TensorFlow model and use it offline. This also brings down the cost significantly.

Or, you can forgo AutoML altogether and brave it on your own with TensorFlow or PyTorch. Good luck, and let me know what you choose!

Originally published at https://daleonai.com on July 7, 2020.
