How I Built a Handwriting Recognizer and Shipped It to the App Store

From constructing a Convolutional Neural Network to deploying an OCR to iOS

The Motivation for the Project ✍️

While I was learning how to create deep learning models for the MNIST dataset a few months ago, I ended up making an iOS app that recognized handwritten characters.


My friend Kaichi Momose was developing a Japanese language learning app, Nukon. He coincidentally wanted to have a similar feature in it. We then collaborated to build something more sophisticated than a digit recognizer: an OCR (Optical Character Recognition/Reader) for Japanese characters (Hiragana and Katakana).


During the development of Nukon, there was no API available for handwriting recognition in Japanese. We had no choice but to build our own OCR. The biggest benefit we got from building one from scratch was that ours works offline. Users can be deep in the mountains without the internet and still open up Nukon to maintain their daily routine of learning Japanese. We learned a lot throughout the process, but more importantly, we were thrilled to ship a better product for our users.


This article will break down the process of how we built a Japanese OCR for iOS apps. For those who would like to build one for other languages/symbols, feel free to customize it by changing the dataset.


Without further ado, let’s take a look at what will be covered:


Part 1️⃣: Obtain the dataset and preprocess images
Part 2️⃣: Build & train the CNN (Convolutional Neural Network)
Part 3️⃣: Integrate the trained model into iOS


Obtain the Dataset & Preprocess Images

The dataset comes from the ETL Character Database, which contains nine sets of images of handwritten characters and symbols. Since we are going to build an OCR for Hiragana, ETL8 is the dataset we will use.


To get the images from the database, we need some helper functions that read and store images in .npz format.


import struct
import numpy as np
from PIL import Image

sz_record = 8199

def read_record_ETL8G(f):
    s = f.read(sz_record)
    r = struct.unpack('>2H8sI4B4H2B30x8128s11x', s)
    iF = Image.frombytes('F', (128, 127), r[14], 'bit', 4)
    iL = iF.convert('L')
    return r + (iL,)

def read_hiragana():
    # Type of characters = 70, person = 160, y = 127, x = 128
    ary = np.zeros([71, 160, 127, 128], dtype=np.uint8)

    for j in range(1, 33):
        filename = '../../ETL8G/ETL8G_{:02d}'.format(j)
        with open(filename, 'rb') as f:
            for id_dataset in range(5):
                moji = 0
                for i in range(956):
                    r = read_record_ETL8G(f)
                    if b'.HIRA' in r[2] or b'.WO.' in r[2]:
                        if not b'KAI' in r[2] and not b'HEI' in r[2]:
                            ary[moji, (j - 1) * 5 + id_dataset] = np.array(r[-1])
                            moji += 1
    np.savez_compressed("hiragana.npz", ary)

read_hiragana()  # run once to write hiragana.npz

Once we have hiragana.npz saved, let's start processing images by loading the file and reshaping the image dimensions to 32x32 pixels. We will also add data augmentation to generate extra images that are rotated and zoomed. When the model is trained on character images from a variety of angles, it can better adapt to people's handwriting.


import scipy.misc
from keras.layers import Conv2D, MaxPooling2D
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.models import Sequential
from keras.preprocessing.image import ImageDataGenerator
from keras.utils import np_utils
from sklearn.model_selection import train_test_split
import numpy as np

# 71 characters
nb_classes = 71
# input image dimensions
img_rows, img_cols = 32, 32

ary = np.load("hiragana.npz")['arr_0'].reshape([-1, 127, 128]).astype(np.float32) / 15
X_train = np.zeros([nb_classes * 160, img_rows, img_cols], dtype=np.float32)
for i in range(nb_classes * 160):
    X_train[i] = scipy.misc.imresize(ary[i], (img_rows, img_cols), mode='F')

y_train = np.repeat(np.arange(nb_classes), 160)

X_train, X_test, y_train, y_test = train_test_split(X_train, y_train, test_size=0.2)

# add a channel dimension (channels-last) so the CNN and ImageDataGenerator receive rank-4 input
X_train = X_train.reshape(X_train.shape[0], img_rows, img_cols, 1)
X_test = X_test.reshape(X_test.shape[0], img_rows, img_cols, 1)
input_shape = (img_rows, img_cols, 1)

# convert class vectors to categorical matrices
y_train = np_utils.to_categorical(y_train, nb_classes)
y_test = np_utils.to_categorical(y_test, nb_classes)

# data augmentation
datagen = ImageDataGenerator(rotation_range=15, zoom_range=0.20)
datagen.fit(X_train)

Build and Train the CNN

Now comes the fun part! We will use Keras to construct a CNN (Convolutional Neural Network) for our model. When I first built the model, I experimented with hyper-parameters and tuned them multiple times. The combination below gave me the highest accuracy: 98.77%. Feel free to play around with different parameters yourself.


model = Sequential()

def model_6_layers():
    model.add(Conv2D(32, 3, 3, input_shape=input_shape))
    model.add(Activation('relu'))
    model.add(Conv2D(32, 3, 3))
    model.add(Activation('relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Dropout(0.5))

    model.add(Conv2D(64, 3, 3))
    model.add(Activation('relu'))
    model.add(Conv2D(64, 3, 3))
    model.add(Activation('relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Dropout(0.5))

    model.add(Flatten())
    model.add(Dense(256))
    model.add(Activation('relu'))
    model.add(Dropout(0.5))
    model.add(Dense(nb_classes))
    model.add(Activation('softmax'))

model_6_layers()

model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit_generator(datagen.flow(X_train, y_train, batch_size=16),
                    samples_per_epoch=X_train.shape[0],
                    nb_epoch=30,
                    validation_data=(X_test, y_test))

Here are some tips if you find the model's performance unsatisfactory during training:


The model is overfitting

This means that the model does not generalize well. Check out this article for intuitive explanations.


How to detect overfitting: acc (training accuracy) keeps going up, but val_acc (validation accuracy) moves in the opposite direction as training goes on.


Some solutions to overfitting: regularization (e.g. dropout), data augmentation, and improving the quality of the dataset.


How to know whether the model is “learning”

The model is not learning if val_loss (validation loss) goes up or does not decrease as the training goes on.


Use TensorBoard — it provides visualizations for model performance over time. It gets rid of the tiresome task of looking at every single epoch and comparing values constantly.

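For example, here is a minimal sketch (not from the original write-up) of wiring TensorBoard, plus an optional early-stopping check, into the fit_generator call shown above; the log directory and patience value are placeholders:

from keras.callbacks import TensorBoard, EarlyStopping

# log loss/accuracy curves for TensorBoard and stop early if val_loss stops improving
callbacks = [
    TensorBoard(log_dir='./logs'),                  # placeholder log directory
    EarlyStopping(monitor='val_loss', patience=5),  # placeholder patience
]

model.fit_generator(datagen.flow(X_train, y_train, batch_size=16),
                    samples_per_epoch=X_train.shape[0],
                    nb_epoch=30,
                    validation_data=(X_test, y_test),
                    callbacks=callbacks)

Running tensorboard --logdir=./logs then lets you watch acc, val_acc, loss, and val_loss evolve across epochs in the browser.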

Once we are satisfied with the accuracy, we remove the dropout layers before saving the weights and model configuration to a file.


import keras

# drop the Dropout layers so they are not part of the exported model
for k in model.layers:
    if type(k) is keras.layers.Dropout:
        model.layers.remove(k)

model.save('hiraganaModel.h5')

The only task left before moving on to the iOS part is converting hiraganaModel.h5 to a CoreML model.


import coremltools

output_labels = [
'あ', 'い', 'う', 'え', 'お',
'か', 'く', 'こ', 'し', 'せ',
'た', 'つ', 'と', 'に', 'ね',
'は', 'ふ', 'ほ', 'み', 'め',
'や', 'ゆ', 'よ', 'ら', 'り',
'る', 'わ', 'が', 'げ', 'じ',
'ぞ', 'だ', 'ぢ', 'づ', 'で',
'ど', 'ば', 'び',
'ぶ', 'べ', 'ぼ', 'ぱ', 'ぴ',
'ぷ', 'ぺ', 'ぽ',
'き', 'け', 'さ', 'す', 'そ',
'ち', 'て', 'な', 'ぬ', 'の',
'ひ', 'へ', 'ま', 'む', 'も',
'れ', 'を', 'ぎ', 'ご', 'ず',
'ぜ', 'ん', 'ぐ', 'ざ', 'ろ']

scale = 1/255.
coreml_model = coremltools.converters.keras.convert('./hiraganaModel.h5',
                                                    input_names='image',
                                                    image_input_names='image',
                                                    output_names='output',
                                                    class_labels=output_labels,
                                                    image_scale=scale)
coreml_model.author = 'Your Name'
coreml_model.license = 'MIT'
coreml_model.short_description = 'Detect hiragana character from handwriting'
coreml_model.input_description['image'] = 'Grayscale image containing a handwritten character'
coreml_model.output_description['output'] = 'Output a character in hiragana'
coreml_model.save('hiraganaModel.mlmodel')
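Before moving on to Xcode, you can optionally sanity-check the converted model straight from Python. This is a sketch rather than part of the original workflow: Core ML prediction only runs on macOS, and the sample image path below is a placeholder.

from PIL import Image

# load a handwriting sample, convert to grayscale, and resize to the model's 32x32 input
test_image = Image.open('sample_hiragana.png').convert('L').resize((32, 32))

prediction = coreml_model.predict({'image': test_image})
print(prediction['classLabel'])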

The output_labels are all possible outputs we will see in iOS later.


Fun fact: if you understand Japanese, you may know that the order of the output characters does not match with the “alphabetical order” of Hiragana. It took us some time to realize that images in ETL8 weren’t in “alphabetical order” (thanks to Kaichi for realizing this). The dataset was compiled by a Japanese university, though…


Integrate the Trained Model Into iOS

We are finally putting everything together! Drag and drop hiraganaModel.mlmodel into an Xcode project. Then you will see something like this:


Note: Xcode will create a workspace upon copying the model. We need to switch our coding environment to the workspace; otherwise the ML model won’t work!


The end goal is having our Hiragana model predict a character by passing in an image. To achieve this, we will create a simple UI so the user can write, and we will store the user’s writing in an image format. Lastly, we retrieve the pixel values of the image and feed them to our model.


Let’s do it step by step:


1. “Draw” characters on UIView with UIBezierPath


import UIKit

class viewController: UIViewController {

    @IBOutlet weak var canvas: UIView!
    var path = UIBezierPath()
    var startPoint = CGPoint()
    var touchPoint = CGPoint()

    override func viewDidLoad() {
        super.viewDidLoad()
        canvas.clipsToBounds = true
        canvas.isMultipleTouchEnabled = true
    }

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        let touch = touches.first
        if let point = touch?.location(in: canvas) {
            startPoint = point
        }
    }

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        let touch = touches.first
        if let point = touch?.location(in: canvas) {
            touchPoint = point
        }
        path.move(to: startPoint)
        path.addLine(to: touchPoint)
        startPoint = touchPoint
        draw()
    }

    func draw() {
        let strokeLayer = CAShapeLayer()
        strokeLayer.fillColor = nil
        strokeLayer.lineWidth = 8
        strokeLayer.strokeColor = UIColor.orange.cgColor
        strokeLayer.path = path.cgPath
        canvas.layer.addSublayer(strokeLayer)
    }

    // clear the drawing in view
    @IBAction func clearPressed(_ sender: UIButton) {
        path.removeAllPoints()
        canvas.layer.sublayers = nil
        canvas.setNeedsDisplay()
    }
}

The strokeLayer.strokeColor can be any color. However, the background color of canvas must be black. Although our training images have a white background and black strokes, the ML model does not react well to an input image with this style.

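If you prefer to do this in code rather than in the storyboard, one extra line in viewDidLoad is enough; a small sketch of the idea (not necessarily the original project's exact setup):

override func viewDidLoad() {
    super.viewDidLoad()
    canvas.clipsToBounds = true
    canvas.isMultipleTouchEnabled = true
    // light strokes on a dark background, matching what the model expects
    canvas.backgroundColor = .black
}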

2. Turn UIView into UIImage and retrieve pixel values with CVPixelBuffer


In the extension, there are two helper functions. Together, they translate images into a pixel buffer, which is equivalent to pixel values. The input width and height should both be 32 since the input dimensions of our model are 32 by 32 pixels.


As soon as we have the pixelBuffer, we can call model.prediction() and pass it in. And there we go: we get a classLabel as the output!


@IBAction func recognizePressed(_ sender: UIButton) {
    // Turn view into an image
    let resultImage = UIImage.init(view: canvas)
    let pixelBuffer = resultImage.pixelBufferGray(width: 32, height: 32)
    let model = hiraganaModel3()
    // output a Hiragana character
    let output = try? model.prediction(image: pixelBuffer!)
    print(output?.classLabel)
}

extension UIImage {
    // Resizes the image to width x height and converts it to a grayscale CVPixelBuffer
    func pixelBufferGray(width: Int, height: Int) -> CVPixelBuffer? {
        return _pixelBuffer(width: width, height: height,
                            pixelFormatType: kCVPixelFormatType_OneComponent8,
                            colorSpace: CGColorSpaceCreateDeviceGray(),
                            alphaInfo: .none)
    }

    func _pixelBuffer(width: Int, height: Int, pixelFormatType: OSType,
                      colorSpace: CGColorSpace, alphaInfo: CGImageAlphaInfo) -> CVPixelBuffer? {
        var maybePixelBuffer: CVPixelBuffer?
        let attrs = [kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue,
                     kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue]
        let status = CVPixelBufferCreate(kCFAllocatorDefault,
                                         width,
                                         height,
                                         pixelFormatType,
                                         attrs as CFDictionary,
                                         &maybePixelBuffer)
        guard status == kCVReturnSuccess, let pixelBuffer = maybePixelBuffer else {
            return nil
        }

        CVPixelBufferLockBaseAddress(pixelBuffer, CVPixelBufferLockFlags(rawValue: 0))
        let pixelData = CVPixelBufferGetBaseAddress(pixelBuffer)

        guard let context = CGContext(data: pixelData,
                                      width: width,
                                      height: height,
                                      bitsPerComponent: 8,
                                      bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer),
                                      space: colorSpace,
                                      bitmapInfo: alphaInfo.rawValue)
        else {
            return nil
        }

        UIGraphicsPushContext(context)
        context.translateBy(x: 0, y: CGFloat(height))
        context.scaleBy(x: 1, y: -1)
        self.draw(in: CGRect(x: 0, y: 0, width: width, height: height))
        UIGraphicsPopContext()

        CVPixelBufferUnlockBaseAddress(pixelBuffer, CVPixelBufferLockFlags(rawValue: 0))
        return pixelBuffer
    }
}
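One note on UIImage.init(view:): it is not a built-in UIKit initializer, so the project needs a small helper that renders a UIView into a UIImage. A minimal sketch of such a helper (assuming iOS 10+ for UIGraphicsImageRenderer) could look like this:

extension UIImage {
    // Hypothetical helper: render the given view (our drawing canvas) into an image
    convenience init(view: UIView) {
        let renderer = UIGraphicsImageRenderer(bounds: view.bounds)
        let rendered = renderer.image { context in
            view.layer.render(in: context.cgContext)
        }
        // for a sketch, force-unwrap; the renderer output is bitmap-backed
        self.init(cgImage: rendered.cgImage!)
    }
}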

3. Show the output with UIAlertController


This step is totally optional. As shown in the GIF at the beginning, I added an alert controller to display the result.


func informResultPopUp(message: String) {
    let alertController = UIAlertController(title: message,
                                            message: nil,
                                            preferredStyle: .alert)
    let ok = UIAlertAction(title: "Ok", style: .default, handler: { action in
        self.dismiss(animated: true, completion: nil)
    })
    alertController.addAction(ok)
    self.present(alertController, animated: true) { () in
    }
}
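To wire this into the prediction handler above, pass the class label through; a small sketch (the fallback message is just illustrative):

if let label = output?.classLabel {
    informResultPopUp(message: label)
} else {
    informResultPopUp(message: "Could not recognize the character")
}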

Voila! We just built an OCR that is demo-ready (and App-Store-ready)!


Conclusion

Building an OCR is not all that hard. As you saw, this article consists of the steps I took and the problems I ran into while building this project. I enjoyed the process of making a bunch of Python code demonstrable by connecting it with iOS, and I intend to continue doing so.


I hope this article provides some useful information to those who want to build an OCR but have no clue where to start.


You can find the source code here.


Bonus: if you are interested in experimenting with shallow algorithms, then keep on reading!


[Optional] Train With Shallow Algorithms

Before implementing the CNN, Kaichi and I tested out other machine learning algorithms to figure out if they could get the job done (and save us some computing costs!). We picked KNN and Random Forest.


To evaluate their performance, we defined our baseline accuracy as 1/71 ≈ 0.014.


We assumed a person without any knowledge of the Japanese language could have a 1.4% chance of guessing a character right.


Thus, a model would be doing well if its accuracy could surpass 1.4%. Let’s see if that was the case.


KNN

The final accuracy we got was 54.84%. Much higher than 1.4% already!


Random Forest

Random Forest reached an accuracy of 79.23%, exceeding our expectations. While tuning hyper-parameters, we got better results by increasing the number of estimators and the depth of the trees. We reasoned that having more trees (estimators) in the forest meant more features in the image were learned, and that the deeper the tree, the more detail it captured from those features.

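For reference, here is a minimal scikit-learn sketch of how the two baselines can be set up on the flattened 32x32 images. The hyper-parameter values are placeholders rather than the ones we settled on, and the labels are assumed to be the integer class labels from before the one-hot conversion:

from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# flatten each 32x32 image into a 1024-dimensional feature vector
X_train_flat = X_train.reshape(len(X_train), -1)
X_test_flat = X_test.reshape(len(X_test), -1)

knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train_flat, y_train)
print("KNN accuracy:", accuracy_score(y_test, knn.predict(X_test_flat)))

forest = RandomForestClassifier(n_estimators=200, max_depth=30)
forest.fit(X_train_flat, y_train)
print("Random Forest accuracy:", accuracy_score(y_test, forest.predict(X_test_flat)))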

If you are interested in learning more, I found this paper that discusses image classification with Random Forest.


Thank you for reading. Any thoughts and feedback are welcomed!


Translated from: https://www.freecodecamp.org/news/build-a-handwriting-recognizer-ship-it-to-app-store-fcce24205b4b/
