Following on from the previous post, this article looks at how to load a deep learning model into an Android app.

Table of Contents

  • Preface
  • I. Tools
  • II. Steps
    • 1. Converting the model format
    • 2. Modifying the configuration files
    • 3. Application code

Preface

Loading a trained image-classification model into a phone application puts the model to practical use. Using SqueezeNet (a typical lightweight model) as the example, this post records how to deploy a lightweight model in a phone app.

I. Tools

PyCharm and Android Studio are used as the IDEs for the Python and the Android development respectively.

II. Steps

1. Converting the model format

Keras is a good library for beginners getting started with deep learning; its API is simple and easy to understand. Keras normally saves a trained model and its weights in h5 format, so the model has to be converted to tflite format before it can be called from Android.

import tensorflow as tf
from keras_squeezenet import SqueezeNet

model = SqueezeNet(weights=None)
# Load the SqueezeNet weights downloaded from the Internet.
model.load_weights("./squeezenet_weights_tf_dim_ordering_tf_kernels.h5")

converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
open("./squeezenet_model.tflite", "wb").write(tflite_model)

The generated tflite file produces, for an input image, the probability of each predicted class.
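Before moving to Android it can be worth sanity-checking the converted file on the PC side. The snippet below is a minimal sketch (not part of the original workflow): it loads squeezenet_model.tflite with TensorFlow's built-in tf.lite.Interpreter, feeds in one random input image, and prints the output shape, which for the ImageNet-trained SqueezeNet should be a (1, 1000) vector of class probabilities.

import numpy as np
import tensorflow as tf

# Load the converted model and allocate its tensors.
interpreter = tf.lite.Interpreter(model_path="./squeezenet_model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed one random image with the shape the model expects, e.g. (1, 224, 224, 3).
dummy_input = np.random.rand(*input_details[0]["shape"]).astype(np.float32)
interpreter.set_tensor(input_details[0]["index"], dummy_input)
interpreter.invoke()

# For the ImageNet-trained SqueezeNet this should be a (1, 1000) probability vector.
probabilities = interpreter.get_tensor(output_details[0]["index"])
print(probabilities.shape, probabilities.sum())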

Loading the trained model on Android requires not only the tflite model but also a label file.

The label file that matches the model can be downloaded from GitHub.

If it is hard to download from there, it can be obtained here instead:
Classification label file
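Note that the classifier code below (loadLabelList in ImageClassifier.java) simply reads this file one line at a time, so it is expected to be a plain-text file with one class name per line, in the same order as the model outputs. Assuming the standard 1000 ImageNet class names that SqueezeNet was trained on, the file would start roughly like this (illustrative excerpt only):

tench
goldfish
great white shark
...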

Next, create a project in Android Studio; that process is not described in detail here.

2. Modifying the configuration files

In the new project, add the following to build.gradle. The aaptOptions entries stop the .tflite model from being compressed during packaging, which is required because the model is later memory-mapped directly from assets:

android {
    ......
    aaptOptions {
        noCompress "tflite"
        noCompress "lite"
    }
}

dependencies {
    .......
    implementation 'org.tensorflow:tensorflow-lite:2.2.0'
}

You can write the UI and configuration files according to your own needs; the AndroidManifest.xml used here is attached below:

<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="gdut.bsx.tensorflowtraining">

    <uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE"/>
    <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/>
    <uses-permission android:name="android.permission.CAMERA"/>

    <application
        android:allowBackup="true"
        android:icon="@mipmap/ic_launcher"
        android:label="@string/app_name"
        android:roundIcon="@mipmap/ic_launcher_round"
        android:supportsRtl="true"
        android:screenOrientation="portrait"
        android:theme="@style/AppTheme">

        <activity android:name=".MainActivity"
            android:screenOrientation="portrait"
            android:launchMode="singleTask">
            <intent-filter>
                <action android:name="android.intent.action.MAIN"/>
                <category android:name="android.intent.category.LAUNCHER"/>
            </intent-filter>
        </activity>

        <provider
            android:name="android.support.v4.content.FileProvider"
            android:authorities="gdut.bsx.tensorflowtraining.fileprovider"
            android:exported="false"
            android:grantUriPermissions="true">
            <meta-data
                android:name="android.support.FILE_PROVIDER_PATHS"
                android:resource="@xml/file_paths" />
        </provider>
    </application>
</manifest>

Place the generated model file and Class_Label.txt into the assets folder, renaming the model to squeezenet.tflite so that it matches MODEL_PATH in ImageClassifier.java below.
Note: be sure to use the classification label file.
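For reference, in a typical Android Studio project the two files end up in the app module's assets directory, roughly like this (the module name may differ in your project):

app/src/main/assets/
├── squeezenet.tflite
└── Class_Label.txt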

3. Application code

The main features implemented here are taking photos with the camera, picking images from storage, and image classification.
The code consists of the following files:
ImageClassifier.java

package gdut.bsx.tensorflowtraining;

import android.content.Context;
import android.content.res.AssetFileDescriptor;
import android.graphics.Bitmap;
import android.os.SystemClock;
import android.util.Log;

import org.tensorflow.lite.Interpreter;

import java.io.BufferedReader;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.channels.FileChannel;
import java.util.AbstractMap;
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.PriorityQueue;

public class ImageClassifier {

    private static volatile ImageClassifier INSTANCE;

    /** Tag for the {@link Log}. */
    private static final String TAG = "TfLiteCameraDemo";

    /** Name of the model file stored in Assets. */
    private static final String MODEL_PATH = "squeezenet.tflite";
    //private static final String MODEL_PATH = "mobilenet_v1_0.25_128_quant.tflite";

    /** Name of the label file stored in Assets. */
    private static final String LABEL_PATH = "Class_Label.txt";

    /** Number of results to show in the UI. */
    private static final int RESULTS_TO_SHOW = 3;

    /** Dimensions of inputs. */
    private static final int DIM_BATCH_SIZE = 1;
    private static final int DIM_PIXEL_SIZE = 3;
    static final int DIM_IMG_SIZE_X = 224;
    static final int DIM_IMG_SIZE_Y = 224;
    private static final int IMAGE_MEAN = 128;
    private static final float IMAGE_STD = 128.0f;

    /* Preallocated buffers for storing image data in. */
    private int[] intValues = new int[DIM_IMG_SIZE_X * DIM_IMG_SIZE_Y];

    /** An instance of the driver class to run model inference with Tensorflow Lite. */
    private Interpreter tflite;

    /** Labels corresponding to the output of the vision model. */
    private List<String> labelList;

    /** A ByteBuffer to hold image data, to be feed into Tensorflow Lite as inputs. */
    private ByteBuffer imgData;

    /** An array to hold inference results, to be feed into Tensorflow Lite as outputs. */
    private float[][] labelProbArray;

    /** multi-stage low pass filter */
    private float[][] filterLabelProbArray = null;
    private static final int FILTER_STAGES = 3;
    private static final float FILTER_FACTOR = 0.4f;

    /** Initializes an {@code ImageClassifier}. */
    private PriorityQueue<Map.Entry<String, Float>> sortedLabels =
            new PriorityQueue<>(
                    RESULTS_TO_SHOW,
                    new Comparator<Map.Entry<String, Float>>() {
                        @Override
                        public int compare(Map.Entry<String, Float> o1, Map.Entry<String, Float> o2) {
                            return (o1.getValue()).compareTo(o2.getValue());
                        }
                    });

    private ImageClassifier(Context activity) throws IOException {
        tflite = new Interpreter(loadModelFile(activity));
        labelList = loadLabelList(activity);
        imgData = ByteBuffer.allocateDirect(4 * DIM_BATCH_SIZE * DIM_IMG_SIZE_X * DIM_IMG_SIZE_Y * DIM_PIXEL_SIZE);
        imgData.order(ByteOrder.nativeOrder());
        labelProbArray = new float[1][labelList.size()];
        filterLabelProbArray = new float[FILTER_STAGES][labelList.size()];
        Log.d(TAG, "Created a Tensorflow Lite Image Classifier.");
    }

    public static ImageClassifier getInstance(Context context) {
        if (INSTANCE == null) {
            try {
                INSTANCE = new ImageClassifier(context);
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
        return INSTANCE;
    }

    /** Classifies a frame from the preview stream. */
    public String classifyFrame(Bitmap bitmap) {
        if (tflite == null) {
            Log.e(TAG, "Image classifier has not been initialized; Skipped.");
            return "Uninitialized Classifier.";
        }
        convertBitmapToByteBuffer(bitmap);
        long startTime = SystemClock.uptimeMillis();
        tflite.run(imgData, labelProbArray);
        long endTime = SystemClock.uptimeMillis();
        Log.d(TAG, "Timecost to run model inference: " + Long.toString(endTime - startTime));
        // Smooth the results across frames.
        //applyFilter();
        return printTopKLabels();
    }

    /** Smooth the results across frames. */
    /*
    void applyFilter() {
        int numLabels = labelList.size();
        // Low pass filter `labelProbArray` into the first stage of the filter.
        for (int j = 0; j < numLabels; ++j) {
            filterLabelProbArray[0][j] +=
                    FILTER_FACTOR * (labelProbArray[0][j] - filterLabelProbArray[0][j]);
        }
        // Low pass filter each stage into the next.
        for (int i = 1; i < FILTER_STAGES; ++i) {
            for (int j = 0; j < numLabels; ++j) {
                filterLabelProbArray[i][j] +=
                        FILTER_FACTOR * (filterLabelProbArray[i - 1][j] - filterLabelProbArray[i][j]);
            }
        }
        // Copy the last stage filter output back to `labelProbArray`.
        for (int j = 0; j < numLabels; ++j) {
            labelProbArray[0][j] = filterLabelProbArray[FILTER_STAGES - 1][j];
        }
    }
    */

    /** Closes tflite to release resources. */
    public void close() {
        tflite.close();
        tflite = null;
    }

    /** Memory-map the model file in Assets. */
    private ByteBuffer loadModelFile(Context activity) throws IOException {
        AssetFileDescriptor fileDescriptor = activity.getAssets().openFd(MODEL_PATH);
        FileInputStream inputStream = new FileInputStream(fileDescriptor.getFileDescriptor());
        FileChannel fileChannel = inputStream.getChannel();
        long startOffset = fileDescriptor.getStartOffset();
        long declaredLength = fileDescriptor.getDeclaredLength();
        return fileChannel.map(FileChannel.MapMode.READ_ONLY, startOffset, declaredLength).asReadOnlyBuffer();
    }

    /** Reads label list from Assets. */
    private List<String> loadLabelList(Context activity) throws IOException {
        List<String> labelList = new ArrayList<String>();
        BufferedReader reader =
                new BufferedReader(new InputStreamReader(activity.getAssets().open(LABEL_PATH)));
        String line;
        while ((line = reader.readLine()) != null) {
            labelList.add(line);
        }
        reader.close();
        return labelList;
    }

    /** Get Label.txt. */
    public List<String> getLabelList() throws IOException {
        return labelList;
    }

    /** Writes Image data into a {@code ByteBuffer}. */
    private void convertBitmapToByteBuffer(Bitmap bitmap) {
        if (imgData == null) {
            return;
        }
        imgData.rewind();
        bitmap.getPixels(intValues, 0, bitmap.getWidth(), 0, 0, bitmap.getWidth(), bitmap.getHeight());
        // Convert the image to floating point.
        int pixel = 0;
        long startTime = SystemClock.uptimeMillis();
        for (int i = 0; i < DIM_IMG_SIZE_X; ++i) {
            for (int j = 0; j < DIM_IMG_SIZE_Y; ++j) {
                final int val = intValues[pixel++];
                imgData.putFloat((((val >> 16) & 0xFF) - IMAGE_MEAN) / IMAGE_STD);
                imgData.putFloat((((val >> 8) & 0xFF) - IMAGE_MEAN) / IMAGE_STD);
                imgData.putFloat((((val) & 0xFF) - IMAGE_MEAN) / IMAGE_STD);
                // imgData.putFloat(((val >> 16) & 0xFF) / IMAGE_STD);
                // imgData.putFloat(((val >> 8) & 0xFF) / IMAGE_STD);
                // imgData.putFloat(((val) & 0xFF) / IMAGE_STD);
            }
        }
        long endTime = SystemClock.uptimeMillis();
        Log.d(TAG, "Timecost to put values into ByteBuffer: " + Long.toString(endTime - startTime));
    }

    /** Prints top-K labels, to be shown in UI as the results. */
    private String printTopKLabels() {
        for (int i = 0; i < labelList.size(); ++i) {
            sortedLabels.add(new AbstractMap.SimpleEntry<>(labelList.get(i), labelProbArray[0][i]));
            if (sortedLabels.size() > RESULTS_TO_SHOW) {
                sortedLabels.poll();
            }
        }
        String textToShow = "";
        final int size = sortedLabels.size();
        for (int i = 0; i < size; ++i) {
            Map.Entry<String, Float> label = sortedLabels.poll();
            textToShow = String.format("\n%s: %4.2f", label.getKey(), label.getValue()) + textToShow;
        }
        return textToShow;
    }
}

MainActivity.java

package gdut.bsx.tensorflowtraining;

import android.Manifest;
import android.content.Context;
import android.content.DialogInterface;
import android.content.Intent;
import android.content.pm.PackageManager;
import android.graphics.Bitmap;
import android.graphics.Matrix;
import android.net.Uri;
import android.os.Build;
import android.os.Bundle;
import android.os.Looper;
import android.os.MessageQueue;
import android.provider.MediaStore;
import android.provider.Settings;
import android.support.annotation.NonNull;
import android.support.annotation.Nullable;
import android.support.v4.app.ActivityCompat;
import android.support.v4.content.ContextCompat;
import android.support.v4.content.FileProvider;
import android.support.v7.app.AlertDialog;
import android.support.v7.app.AppCompatActivity;
import android.support.v7.app.AppCompatDelegate;
import android.util.Log;
import android.view.View;
import android.widget.ImageView;
import android.widget.TextView;
import android.widget.Toast;

import com.bumptech.glide.load.DataSource;
import com.bumptech.glide.load.engine.GlideException;
import com.bumptech.glide.request.RequestListener;
import com.bumptech.glide.request.target.Target;

import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Executor;
import java.util.concurrent.ScheduledThreadPoolExecutor;
import java.util.concurrent.ThreadFactory;

public class MainActivity extends AppCompatActivity implements View.OnClickListener {

    public static final String TAG = MainActivity.class.getSimpleName();

    private static final int OPEN_SETTING_REQUEST_COED = 110;
    private static final int TAKE_PHOTO_REQUEST_CODE = 120;
    private static final int PICTURE_REQUEST_CODE = 911;
    private static final int PERMISSIONS_REQUEST = 108;
    private static final int CAMERA_PERMISSIONS_REQUEST_CODE = 119;
    private static final String CURRENT_TAKE_PHOTO_URI = "currentTakePhotoUri";

    private gdut.bsx.tensorflowtraining.ImageClassifier imageClassifier;
    static final int dstWidth = 224;
    static final int dstHeight = 224;
    private Executor executor;
    private Uri currentTakePhotoUri;
    private TextView result;
    private ImageView ivPicture;

    static {
        AppCompatDelegate.setCompatVectorFromResourcesEnabled(true);
    }

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        if (!isTaskRoot()) {
            finish();
        }
        setContentView(R.layout.activity_main);
        findViewById(R.id.iv_choose_picture).setOnClickListener(this);
        findViewById(R.id.iv_take_photo).setOnClickListener(this);
        ivPicture = findViewById(R.id.iv_picture);
        result = findViewById(R.id.tv_classifier_info);
        initialiseImageClassifier(this);
        // Defer time-consuming work so it does not steal CPU time from UI drawing and slow down start-up.
        Looper.myQueue().addIdleHandler(idleHandler);
    }

    @Override
    public void onSaveInstanceState(Bundle savedInstanceState) {
        // Prevent data loss when the activity cannot be returned to directly after taking a photo.
        savedInstanceState.putParcelable(CURRENT_TAKE_PHOTO_URI, currentTakePhotoUri);
        super.onSaveInstanceState(savedInstanceState);
    }

    @Override
    protected void onRestoreInstanceState(Bundle savedInstanceState) {
        super.onRestoreInstanceState(savedInstanceState);
        if (savedInstanceState != null) {
            currentTakePhotoUri = savedInstanceState.getParcelable(CURRENT_TAKE_PHOTO_URI);
        }
    }

    /**
     * Handle time-consuming work when the main-thread message queue is idle
     * (after the first frame of the view has been drawn).
     */
    MessageQueue.IdleHandler idleHandler = new MessageQueue.IdleHandler() {
        @Override
        public boolean queueIdle() {
            // Initialise the thread pool.
            executor = new ScheduledThreadPoolExecutor(1, new ThreadFactory() {
                @Override
                public Thread newThread(@NonNull Runnable r) {
                    Thread thread = new Thread(r);
                    thread.setDaemon(true);
                    thread.setName("ThreadPool-ImageClassifier");
                    return thread;
                }
            });
            // Request permissions.
            requestMultiplePermissions();
            return false;
        }
    };

    /**
     * Request the storage and camera permissions.
     */
    private void requestMultiplePermissions() {
        String storagePermission = Manifest.permission.WRITE_EXTERNAL_STORAGE;
        String cameraPermission = Manifest.permission.CAMERA;
        int hasStoragePermission = ActivityCompat.checkSelfPermission(this, storagePermission);
        int hasCameraPermission = ActivityCompat.checkSelfPermission(this, cameraPermission);
        List<String> permissions = new ArrayList<>();
        if (hasStoragePermission != PackageManager.PERMISSION_GRANTED) {
            permissions.add(storagePermission);
        }
        if (hasCameraPermission != PackageManager.PERMISSION_GRANTED) {
            permissions.add(cameraPermission);
        }
        if (!permissions.isEmpty()) {
            String[] params = permissions.toArray(new String[permissions.size()]);
            ActivityCompat.requestPermissions(this, params, PERMISSIONS_REQUEST);
        }
    }

    @Override
    public void onRequestPermissionsResult(int requestCode, String[] permissions, int[] grantResults) {
        if (requestCode == PERMISSIONS_REQUEST) {
            if (Manifest.permission.WRITE_EXTERNAL_STORAGE.equals(permissions[0])
                    && grantResults[0] != PackageManager.PERMISSION_GRANTED) {
                // Permission denied: show a dialog telling the user the storage permission is required.
                // Should we show an explanation?
                // shouldShowRequestPermissionRationale() returns false when the app has no chance of being granted.
                if (ActivityCompat.shouldShowRequestPermissionRationale(this, Manifest.permission.WRITE_EXTERNAL_STORAGE)) {
                    // Ask again through the system dialog.
                    showNeedStoragePermissionDialog();
                } else {
                    // Permanently denied (e.g. the user ticked "Don't ask again"), so explain with our own dialog.
                    showMissingStoragePermissionDialog();
                }
            }
        } else if (requestCode == CAMERA_PERMISSIONS_REQUEST_CODE) {
            if (grantResults[0] != PackageManager.PERMISSION_GRANTED) {
                showNeedCameraPermissionDialog();
            } else {
                openSystemCamera();
            }
        }
    }

    /**
     * Show a hint that the permission is missing; the runtime permission can be requested again.
     */
    private void showNeedStoragePermissionDialog() {
        new AlertDialog.Builder(this)
                .setTitle("权限获取提示")
                .setMessage("必须要有存储权限才能获取到图片")
                .setNegativeButton("取消", new DialogInterface.OnClickListener() {
                    @Override
                    public void onClick(DialogInterface dialog, int which) {
                        dialog.cancel();
                    }
                })
                .setPositiveButton("确定", new DialogInterface.OnClickListener() {
                    @Override
                    public void onClick(DialogInterface dialog, int which) {
                        ActivityCompat.requestPermissions(MainActivity.this,
                                new String[]{Manifest.permission.WRITE_EXTERNAL_STORAGE}, PERMISSIONS_REQUEST);
                    }
                })
                .setCancelable(false)
                .show();
    }

    /**
     * Show a hint that the permission was denied; it can only be changed manually in Settings.
     */
    private void showMissingStoragePermissionDialog() {
        new AlertDialog.Builder(this)
                .setTitle("权限获取失败")
                .setMessage("必须要有存储权限才能正常运行")
                .setNegativeButton("取消", new DialogInterface.OnClickListener() {
                    @Override
                    public void onClick(DialogInterface dialog, int which) {
                        MainActivity.this.finish();
                    }
                })
                .setPositiveButton("去设置", new DialogInterface.OnClickListener() {
                    @Override
                    public void onClick(DialogInterface dialog, int which) {
                        startAppSettings();
                    }
                })
                .setCancelable(false)
                .show();
    }

    private void showNeedCameraPermissionDialog() {
        new AlertDialog.Builder(this)
                .setMessage("摄像头权限被关闭,请开启权限后重试")
                .setPositiveButton("确定", new DialogInterface.OnClickListener() {
                    @Override
                    public void onClick(DialogInterface dialog, int which) {
                        dialog.dismiss();
                    }
                })
                .create()
                .show();
    }

    private static final String PACKAGE_URL_SCHEME = "package:";

    /**
     * Open the app's settings page so the permission can be granted there.
     */
    private void startAppSettings() {
        Intent intent = new Intent(Settings.ACTION_APPLICATION_DETAILS_SETTINGS);
        intent.setData(Uri.parse(PACKAGE_URL_SCHEME + getPackageName()));
        startActivityForResult(intent, OPEN_SETTING_REQUEST_COED);
    }

    @Override
    public void onClick(View view) {
        switch (view.getId()) {
            case R.id.iv_choose_picture:
                choosePicture();
                break;
            case R.id.iv_take_photo:
                takePhoto();
                break;
            default:
                break;
        }
    }

    /**
     * Choose a picture and crop it to obtain a small image.
     */
    private void choosePicture() {
        Intent intent = new Intent(Intent.ACTION_GET_CONTENT);
        intent.setType("image/*");
        startActivityForResult(intent, PICTURE_REQUEST_CODE);
    }

    /**
     * Take a photo with the system camera.
     */
    private void takePhoto() {
        if (ContextCompat.checkSelfPermission(this, Manifest.permission.CAMERA) != PackageManager.PERMISSION_GRANTED) {
            ActivityCompat.requestPermissions(this, new String[]{Manifest.permission.CAMERA}, CAMERA_PERMISSIONS_REQUEST_CODE);
        } else {
            openSystemCamera();
        }
    }

    /**
     * Open the system camera.
     */
    private void openSystemCamera() {
        // Launch the system camera.
        Intent takePhotoIntent = new Intent();
        takePhotoIntent.setAction(MediaStore.ACTION_IMAGE_CAPTURE);
        // Without this check the app would crash on a device that has no camera app installed.
        if (takePhotoIntent.resolveActivity(getPackageManager()) == null) {
            Toast.makeText(this, "当前系统没有可用的相机应用", Toast.LENGTH_SHORT).show();
            return;
        }
        String fileName = "TF_" + System.currentTimeMillis() + ".jpg";
        File photoFile = new File(FileUtil.getPhotoCacheFolder(), fileName);
        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.N) {
            // Create a content:// Uri through the FileProvider.
            currentTakePhotoUri = FileProvider.getUriForFile(this, "gdut.bsx.tensorflowtraining.fileprovider", photoFile);
            // Temporarily grant the target app access to the file represented by this Uri.
            takePhotoIntent.addFlags(Intent.FLAG_GRANT_READ_URI_PERMISSION);
        } else {
            currentTakePhotoUri = Uri.fromFile(photoFile);
        }
        // Save the captured photo to the output Uri instead of keeping it in the gallery.
        takePhotoIntent.putExtra(MediaStore.EXTRA_OUTPUT, currentTakePhotoUri);
        startActivityForResult(takePhotoIntent, TAKE_PHOTO_REQUEST_CODE);
    }

    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        super.onActivityResult(requestCode, resultCode, data);
        if (resultCode == RESULT_OK) {
            if (requestCode == PICTURE_REQUEST_CODE) {
                // Handle the selected picture.
                handleInputPhoto(data.getData());
            } else if (requestCode == OPEN_SETTING_REQUEST_COED) {
                requestMultiplePermissions();
            } else if (requestCode == TAKE_PHOTO_REQUEST_CODE) {
                // If the photo was taken successfully, load it and run recognition.
                handleInputPhoto(currentTakePhotoUri);
            }
        }
    }

    /**
     * Handle the input image.
     * @param imageUri
     */
    private void handleInputPhoto(Uri imageUri) {
        // Load the image.
        GlideApp.with(MainActivity.this)
                .asBitmap()
                .listener(new RequestListener<Bitmap>() {
                    @Override
                    public boolean onLoadFailed(@Nullable GlideException e, Object model, Target<Bitmap> target, boolean isFirstResource) {
                        Log.d(TAG, "handleInputPhoto onLoadFailed");
                        Toast.makeText(MainActivity.this, "图片加载失败", Toast.LENGTH_SHORT).show();
                        return false;
                    }

                    @Override
                    public boolean onResourceReady(Bitmap resource, Object model, Target<Bitmap> target, DataSource dataSource, boolean isFirstResource) {
                        Log.d(TAG, "handleInputPhoto onResourceReady");
                        startImageClassifier(resource);
                        return false;
                    }
                })
                .load(imageUri)
                .into(ivPicture);
        //result.setText("Processing...");
    }

    /**
     * Start the image classification.
     * @param bitmap
     */
    private void startImageClassifier(final Bitmap bitmap) {
        executor.execute(new Runnable() {
            @Override
            public void run() {
                Log.i(TAG, Thread.currentThread().getName() + " startImageClassifier");
                Bitmap reshapeBitmap = Bitmap.createScaledBitmap(bitmap, dstWidth, dstHeight, false);
                final String results = imageClassifier.classifyFrame(reshapeBitmap);
                Log.i(TAG, "startImageClassifier results: " + results);
                runOnUiThread(new Runnable() {
                    @Override
                    public void run() {
                        result.setText(results);
                    }
                });
            }
        });
    }

    private void initialiseImageClassifier(Context app) {
        try {
            imageClassifier = gdut.bsx.tensorflowtraining.ImageClassifier.getInstance(app);
        } catch (Exception e) {
            Log.e(TAG, "Failed to initialize an image classifier.");
        }
    }
}

FileUtil.java

package gdut.bsx.tensorflowtraining;

import android.content.Context;
import android.media.MediaScannerConnection;
import android.net.Uri;
import android.os.Environment;
import android.text.TextUtils;
import android.util.Log;

import java.io.File;

public class FileUtil {

    /**
     * Notify the system to rescan the media library after a media file (image, video, ...) is added or deleted.
     * @param filePath file path, including the extension
     */
    public static void notifyScanMediaFile(Context context, String filePath) {
        if (context == null || TextUtils.isEmpty(filePath)) {
            Log.e("FileUtil", "notifyScanMediaFile context is null or filePath is empty.");
            return;
        }
        MediaScannerConnection.scanFile(context, new String[]{filePath}, null,
                new MediaScannerConnection.OnScanCompletedListener() {
                    @Override
                    public void onScanCompleted(String path, Uri uri) {
                        Log.i("FileUtil", "onScanCompleted");
                    }
                });
    }

    public static File getPhotoCacheFolder() {
        File cacheFolder = new File(Environment.getExternalStorageDirectory(), "TensorFlowPhotos");
        if (!cacheFolder.exists()) {
            cacheFolder.mkdirs();
        }
        return cacheFolder;
    }
}

The full source code can be obtained from the link below.
Full source code download
