Recognizing Traffic Lights With Deep Learning

How I learned deep learning in 10 weeks and won $5,000

by David Brailovsky

I recently won first place in the Nexar Traffic Light Recognition Challenge, a computer vision competition organized by a company that’s building an AI dash cam app.

In this post, I’ll describe the solution I used. I’ll also explore approaches that did and did not work in my effort to improve my model.

Don’t worry — you don’t need to be an AI expert to understand this post. I’ll focus on the ideas and methods I used as opposed to the technical implementation.

The challenge

The goal of the challenge was to recognize the traffic light state in images taken by drivers using the Nexar app. In any given image, the classifier needed to output whether there was a traffic light in the scene, and whether it was red or green. More specifically, it should only identify traffic lights in the driving direction.

Here are a few examples to make it clearer:

The images above are examples of the three possible classes I needed to predict: no traffic light (left), red traffic light (center) and green traffic light (right).

The challenge required the solution to be based on Convolutional Neural Networks, a very popular method used in image recognition with deep neural networks. The submissions were scored based on the model’s accuracy along with the model’s size (in megabytes). Smaller models got higher scores. In addition, the minimum accuracy required to win was 95%.

Nexar provided 18,659 labeled images as training data. Each image was labeled with one of the three classes mentioned above (no traffic light / red / green).

Software and hardware

I used Caffe to train the models. The main reason I chose Caffe was the large variety of pre-trained models available for it.

Python, NumPy & Jupyter Notebook were used for analyzing results, exploring the data, and writing ad-hoc scripts.

Amazon’s GPU instances (g2.2xlarge) were used to train the models. My AWS bill ended up being $263 (!). Not cheap.

The code and files I used to train and run the model are on GitHub.

The final classifier

The final classifier achieved an accuracy of 94.955% on Nexar’s test set, with a model size of ~7.84 MB. To compare, GoogLeNet uses a model size of 41 MB, and VGG-16 uses a model size of 528 MB.

Nexar was kind enough to accept 94.955% as 95%, passing the minimum requirement.

The process of getting higher accuracy involved a LOT of trial and error. Some of it had some logic behind it, and some was just “maybe this will work”. I’ll describe some of the things I tried that did and didn’t help improve the model. The final classifier details are described right after.

What worked?

Transfer learning

I started off by trying to fine-tune a model which was pre-trained on ImageNet with the GoogLeNet architecture. Pretty quickly this got me to >90% accuracy!
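
For the curious, here is roughly what fine-tuning looks like with pycaffe. This is a minimal sketch, assuming hypothetical file names for the solver definition and the ImageNet weights:

```python
import caffe

caffe.set_mode_gpu()

# Hypothetical file names, for illustration only. The solver prototxt
# points at a GoogLeNet train/val definition whose final classifier
# layer was renamed (and resized to 3 outputs) so it gets re-initialized.
solver = caffe.get_solver('googlenet_traffic_lights_solver.prototxt')

# Start from the ImageNet weights instead of a random initialization;
# this is the "transfer" part of transfer learning.
solver.net.copy_from('bvlc_googlenet.caffemodel')

solver.solve()
```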

Nexar mentioned on the challenge page that it should be possible to reach 93% by fine-tuning GoogLeNet. I’m not exactly sure what I did wrong there; I might look into it.

SqueezeNet

SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size.

Since the competition rewards solutions that use small models, early on I decided to look for a compact network with as few parameters as possible that can still produce good results. Most of the recently published networks are very deep and have a lot of parameters. SqueezeNet seemed to be a very good fit, and it also had a pre-trained model trained on ImageNet available in Caffe’s Model Zoo which came in handy.

The network manages to stay compact by:

  • Using mostly 1x1 convolution filters and only some 3x3
  • Reducing the number of input channels into the 3x3 filters (see the parameter-count sketch below)
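
To get a feel for why these two tricks shrink the network, here is a quick back-of-the-envelope parameter count in plain Python (the channel counts are illustrative, not taken from the actual architecture):

```python
def conv_params(in_channels, out_channels, kernel_size):
    """Number of weights in a convolution layer (ignoring biases)."""
    return in_channels * out_channels * kernel_size * kernel_size

# A plain 3x3 convolution mapping 256 channels to 256 channels:
print(conv_params(256, 256, 3))  # 589,824 parameters

# "Squeezing" to 32 channels with 1x1 filters before expanding with 3x3:
print(conv_params(256, 32, 1) + conv_params(32, 256, 3))  # 81,920 parameters
```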

For more details, I recommend reading this blog post by Lab41 or the original paper.

After some back and forth adjusting the learning rate, I was able to fine-tune the pre-trained model, as well as train one from scratch, with good accuracy: 92%! Very cool!

Rotating images

Most of the images were horizontal like the one above, but about 2.4% were vertical, with all kinds of directions for “up”. See below.

Although it’s not a big part of the data-set, I wanted the model to classify them correctly too.

Unfortunately, there was no EXIF data in the JPEG images specifying the orientation. At first I considered using a heuristic to identify the sky and flip the image accordingly, but that did not seem straightforward.

Instead, I tried to make the model invariant to rotations. My first attempt was to train the network with random rotations of 0°, 90°, 180° and 270°. That didn’t help. But when averaging the predictions of the 4 rotations for each image, there was an improvement!

92% → 92.6%

To clarify: by “averaging the predictions” I mean averaging the probabilities the model produced for each class across the 4 image variations.
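
In code, the test-time part is a few lines of NumPy. A minimal sketch, where `predict` stands in for a forward pass through the network:

```python
import numpy as np

def predict_with_rotations(predict, image):
    """Average class probabilities over the 4 right-angle rotations.

    `predict` is assumed to map an HxWx3 image to a length-3
    probability vector (no traffic light / red / green).
    """
    probs = [predict(np.rot90(image, k)) for k in range(4)]
    return np.mean(probs, axis=0)
```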

Oversampling crops

During training, the SqueezeNet network performs random cropping on the input images by default, and I didn’t change that. This type of data augmentation helps the network generalize better.

Similarly, when generating predictions, I took several crops of the input image and averaged the results. I used 5 crops: 4 corners and a center crop. The implementation came for free by using existing Caffe code.
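
A rough NumPy sketch of the test-time procedure (Caffe’s Python interface has a ready-made helper along these lines, `caffe.io.oversample`, which also adds mirrored versions):

```python
import numpy as np

def five_crops(image, crop_h, crop_w):
    """Return the 4 corner crops and the center crop of an HxWxC image."""
    h, w = image.shape[:2]
    top, left = (h - crop_h) // 2, (w - crop_w) // 2
    return [
        image[:crop_h, :crop_w],                      # top-left
        image[:crop_h, w - crop_w:],                  # top-right
        image[h - crop_h:, :crop_w],                  # bottom-left
        image[h - crop_h:, w - crop_w:],              # bottom-right
        image[top:top + crop_h, left:left + crop_w],  # center
    ]

def predict_with_crops(predict, image, crop_size=227):
    """Average the predictions over the 5 crops of the input image."""
    crops = five_crops(image, crop_size, crop_size)
    return np.mean([predict(c) for c in crops], axis=0)
```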

92% → 92.46%

Rotating images together with oversampling crops showed a very slight improvement.

Additional training with lower learning rate

All models started to overfit after a certain point. I noticed this when the validation-set loss began to rise.

I stopped the training at that point because the model was probably not generalizing any more. This meant that the learning rate didn’t have time to decay all the way to zero. I tried resuming the training process at the point where the model started overfitting with a learning rate 10 times lower than the original one. This usually improved the accuracy by 0-0.5%.
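
In Caffe, this amounts to making a copy of the solver definition with `base_lr` divided by 10 and restoring from a snapshot. A sketch with placeholder file names:

```python
import caffe

# A copy of the original solver prototxt with base_lr divided by 10
# (e.g. base_lr: 0.001 instead of 0.01). File names are placeholders.
solver = caffe.get_solver('solver_lower_lr.prototxt')

# Resume from a snapshot taken around the point where the
# validation-set loss started to rise.
solver.restore('snapshot_iter_20000.solverstate')
solver.solve()
```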

More training data

At first, I split my data into 3 sets: training (64%), validation (16%) & test (20%). After a few days, I thought that giving up 36% of the data might be too much. I merged the training & validation sets and used the test-set to check my results.
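
For reference, a split like this takes only a few lines of NumPy (a sketch over a hypothetical directory layout):

```python
import glob
import numpy as np

paths = np.array(sorted(glob.glob('train_images/*.jpg')))  # hypothetical layout
idx = np.random.RandomState(0).permutation(len(paths))

n_train = int(0.64 * len(paths))
n_val = int(0.16 * len(paths))
train = paths[idx[:n_train]]
val = paths[idx[n_train:n_train + n_val]]
test = paths[idx[n_train + n_val:]]
```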

I retrained a model with “image rotations” and “additional training at lower rate” and saw improvement:

92.6% → 93.5%

Relabeling mistakes in the training data

When analyzing the mistakes the classifier made on the validation set, I noticed that some of them came with very high confidence. In other words, the model was certain it was one thing (e.g. green light) while the training data said another (e.g. red light).

Notice that in the plot above, the right-most bar is pretty high. That means there was a large number of mistakes with >95% confidence. When examining these cases up close, I saw these were usually mistakes in the ground-truth of the training set rather than in the trained model.
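
Finding these suspects takes only a few lines of NumPy. A sketch, assuming the validation-set probabilities and ground-truth labels were saved to hypothetical `.npy` files during evaluation:

```python
import numpy as np

# Hypothetical files: predicted class probabilities (N x 3) and
# ground-truth class indices (N,) for the validation set.
probs = np.load('val_probs.npy')
labels = np.load('val_labels.npy')

predicted = probs.argmax(axis=1)
confidence = probs.max(axis=1)

# Mistakes the model is very sure about: prime suspects for bad labels.
suspicious = np.where((predicted != labels) & (confidence > 0.95))[0]
print('high-confidence mistakes to re-check:', len(suspicious))
```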

I decided to fix these errors in the training set. The reasoning was that these mistakes confuse the model, making it harder for it to generalize. Even if the final testing-set has mistakes in the ground-truth, a more generalized model has a better chance of high accuracy across all the images.

I manually labeled 709 images that one of my models got wrong. This changed the ground-truth for 337 out of the 709 images. It took about an hour of manual work, with a Python script helping me stay efficient.

Above is the same plot after re-labeling and retraining the model. Looks better!

This improved the previous model by:

93.5% → 94.1% ✌️

Ensemble of models

Using several models together and averaging their results improved the accuracy as well. I experimented with different kinds of modifications in the training process of the models involved in the ensemble. A noticeable improvement was achieved by adding a model trained from scratch, even though it had lower accuracy on its own, to the models that were fine-tuned from pre-trained weights. Perhaps this is because it learned different features than the fine-tuned models.

The ensemble used 3 models with accuracies of 94.1%, 94.2% and 92.9% and together got an accuracy of 94.8%.

What didn’t work?

Lots of things! Hopefully some of these ideas can be useful in other settings.

Combatting overfitting

While trying to deal with overfitting I tried several things, none of which produced significant improvements:

  • increasing the dropout ratio in the network
  • more data augmentation (random shifts, zooms, skews)
  • training on more data: using a 90/10 split instead of 80/20

Balancing the dataset

The dataset wasn’t very balanced:

  • 19% of images were labeled with no traffic light
  • 53% red light
  • 28% green light

I tried balancing the dataset by oversampling the less common classes but didn’t notice any improvement.
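
The oversampling itself is straightforward; a sketch of the idea (not my exact code):

```python
import numpy as np

def oversample_minority(labels, rng=np.random):
    """Return indices that duplicate examples of the rarer classes
    until every class matches the size of the largest one."""
    labels = np.asarray(labels)
    counts = np.bincount(labels)
    extra = []
    for cls, count in enumerate(counts):
        cls_idx = np.where(labels == cls)[0]
        extra.append(rng.choice(cls_idx, counts.max() - count, replace=True))
    return np.concatenate([np.arange(len(labels))] + extra)
```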

Separating day & night

My intuition was that recognizing traffic lights in daylight and nighttime is very different. I thought maybe I could help the model by separating it into two simpler problems.

It was fairly easy to separate the images into day and night by looking at their average pixel intensity:

You can see a very natural separation between dark images with low average values, taken at nighttime, and bright images, taken at daytime.
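
A sketch of what this separation could look like; the threshold is an illustrative value, not the exact cutoff I used:

```python
import caffe

def is_night(image_path, threshold=0.2):
    """Classify an image as nighttime by its average pixel intensity.

    caffe.io.load_image returns an HxWx3 float array in [0, 1], so the
    mean over all pixels and channels is a simple brightness measure.
    """
    return caffe.io.load_image(image_path).mean() < threshold
```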

I tried two approaches; neither improved the results:

  • Training two separate models for day images and night images
  • Training the network to predict 6 classes instead of 3, by also predicting whether it’s day or night

Using better variants of SqueezeNet

I experimented a little bit with two improved variants of SqueezeNet. The first used residual connections and the second was trained with dense→sparse→dense training (more details in the paper). No luck.

Localization of traffic lights

After reading a great post by deepsense.io on how they won the whale recognition challenge, I tried to train a localizer, i.e. first identify the location of the traffic light in the image, and then identify the traffic light state on a small region of the image.

I used sloth to annotate about 2,000 images, which took a few hours. When I tried to train a model on them, it overfit very quickly, probably because there was not enough labeled data. Perhaps this could work if I had annotated a lot more images.

Training a classifier on the hard cases

I chose the 30% of images that were “harder” by selecting those my classifier was less than 97% confident about. I then tried to train a classifier just on these images. No improvement.

Different optimization algorithm

I experimented briefly with using Caffe’s Adam solver instead of SGD with a linearly decreasing learning rate, but didn’t see any improvement.

Adding more models to the ensemble

Since the ensemble method proved helpful, I tried to double down on it. I tried changing different parameters to produce different models and add them to the ensemble: initial seed, dropout rate, different training data (different splits), different checkpoints in the training. None of these made any significant improvement.

Final classifier details

The classifier uses an ensemble of 3 separately trained networks. A weighted average of the probabilities they give to each class is used as the output. All three networks were using the SqueezeNet network but each one was trained differently.

Model #1 — Pre-trained network with oversampling

Trained on the re-labeled training set (after fixing the ground-truth mistakes). The model was fine-tuned from a SqueezeNet model pre-trained on ImageNet.

Data augmentation during training:

  • Random horizontal mirroring
  • Randomly cropping patches of size 227 x 227 before feeding them into the network

At test time, the predictions of 10 variations of each image were averaged to calculate the final prediction. The 10 variations were made of:

  • 5 crops of size 227 x 227: 1 for each corner and 1 in the center of the image
  • for each crop, a horizontally mirrored version was also used

Model accuracy on validation set: 94.21%
Model size: ~2.6 MB

Model #2 — Adding rotation invariance

Very similar to Model #1, with the addition of image rotations. During training time, images were randomly rotated by 90°, 180°, 270° or not at all. At test-time, each one of the 10 variations described in Model #1 created three more variations by rotating it by 90°, 180° and 270°. A total of 40 variations were classified by our model and averaged together.

Model accuracy on validation set: 94.1%
Model size: ~2.6 MB

Model #3 — Trained from scratch

This model was not fine-tuned, but instead trained from scratch. The rationale behind it was that even though it achieves lower accuracy, it learns different features on the training set than the previous two models, which could be useful when used in an ensemble.

Data augmentation during training and testing was the same as for Model #1: mirroring and cropping.

Model accuracy on validation set: 92.92%
Model size: ~2.6 MB

Combining the models together

Each model output three values, representing the probability that the image belongs to each one of the three classes. We averaged their outputs with the following weights:

  • Model #1: 0.28
  • Model #2: 0.49
  • Model #3: 0.23

The values for the weights were found by doing a grid-search over possible values and testing them on the validation set. They are probably a little overfitted to the validation set, but perhaps not too much, since this is a very simple operation.
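
A sketch of how such a grid search might look, assuming each model’s validation-set probabilities were saved to hypothetical `.npy` files:

```python
import numpy as np

# Hypothetical files: per-model class probabilities on the validation
# set, each of shape (N, 3), plus ground-truth labels of shape (N,).
p1, p2, p3 = (np.load('model%d_val_probs.npy' % i) for i in (1, 2, 3))
labels = np.load('val_labels.npy')

best_acc, best_weights = 0.0, None
for w1 in np.arange(0.0, 1.01, 0.01):
    for w2 in np.arange(0.0, 1.01 - w1, 0.01):
        w3 = 1.0 - w1 - w2
        ensemble = w1 * p1 + w2 * p2 + w3 * p3
        acc = (ensemble.argmax(axis=1) == labels).mean()
        if acc > best_acc:
            best_acc, best_weights = acc, (w1, w2, w3)

print('best weights:', best_weights, 'validation accuracy:', best_acc)
```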

Model accuracy on validation set: 94.83%
Model size: ~7.84 MB
Model accuracy on Nexar’s test set: 94.955%

Examples of the model’s mistakes

The green dot in the palm tree produced by the glare probably made the model predict there’s a green light by mistake.

The model predicted red instead of green. Tricky case when there is more than one traffic light in the scene.

The model said there’s no traffic light while there’s a green traffic light ahead.

Conclusion

This was the first time I applied deep learning to a real problem! I was happy to see it worked so well. I learned a LOT during the process and will probably write another post that will hopefully help newcomers waste less time on some of the mistakes and technical challenges I had.

I want to thank Nexar for providing this great challenge and hope they organize more of these in the future!

If you enjoyed reading this post, please share it on social media!

Would love to get your feedback and questions below!
