Notes on Using the Open-Source OpenSfM Software
- 1 Popular open-source Structure-from-Motion packages
- 2 Running OpenSfM the easy way (on Linux)
- 3 OpenSfM's commands
- 4 Customizing OpenSfM
- 4.1 Using your own images
- 4.2 Providing your own camera intrinsics
- 5 Results
- 6 Practical tips
- Reference
1 Popular open-source Structure-from-Motion packages
- OpenMVG 3.5k⭐
- Colmap 2.9k⭐
- OpenSfM (Python) 2.1k⭐
- Bundler 1.3k⭐
- MVE 758⭐
- TheiaSfM 652⭐
- MICMAC 292⭐
(⭐ = number of GitHub stars)
[Notes]
Bundler was written by Professor Snavely of the University of Washington, one of the originators of SfM.
Of the seven packages above, only OpenSfM is written in Python; all the others are C++. Even OpenSfM relies on Python bindings to C++, mainly because it uses the C++ nonlinear optimization library Ceres (practically every SfM and SLAM package depends on that library).
2 Running OpenSfM the easy way (on Linux)
(My OS is Ubuntu 20.04 LTS. OpenSfM runs entirely on the CPU, so it works fine without a GPU or GPU drivers.)
(Note: OpenSfM does not work straight out of the download. Following the official documentation, you must install its dependencies and build it first; this post does not cover installation.)
The most convenient way to run the whole OpenSfM pipeline is to cd into the OpenSfM root directory in a terminal and run:
# berlin is the folder holding the sample images
bin/opensfm_run_all data/berlin
This command actually executes the bin/opensfm_run_all script in the root directory, which is nothing more than the following eight commands:
$PYTHON $DIR/opensfm extract_metadata $1
$PYTHON $DIR/opensfm detect_features $1
$PYTHON $DIR/opensfm match_features $1
$PYTHON $DIR/opensfm create_tracks $1
$PYTHON $DIR/opensfm reconstruct $1
$PYTHON $DIR/opensfm mesh $1
$PYTHON $DIR/opensfm undistort $1
$PYTHON $DIR/opensfm compute_depthmaps $1
These eight commands run the entire SfM pipeline; the outputs end up in data/berlin under the root directory, laid out like this:
berlin/
├── config.yaml
├── images/
├── masks/
├── gcp_list.txt
├── exif/
├── camera_models.json
├── features/
├── matches/
├── tracks.csv
├── reconstruction.json
├── reconstruction.meshed.json
└── undistorted/
    ├── images/
    ├── masks/
    ├── tracks.csv
    ├── reconstruction.json
    └── depthmaps/
        └── merged.ply
merged.ply can be opened in Meshlab to inspect the reconstruction.
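If you only want a quick point count without opening Meshlab, the PLY header can be inspected directly. A minimal pure-Python sketch; the header below is made-up sample data mimicking the start of a merged.ply, and the parsing assumes the standard `element vertex <count>` declaration:

```python
def ply_vertex_count(header_text):
    """Return the vertex count declared in a PLY header."""
    for line in header_text.splitlines():
        parts = line.strip().split()
        # A standard PLY header declares: "element vertex <count>"
        if parts[:2] == ["element", "vertex"]:
            return int(parts[2])
    raise ValueError("no vertex element found in PLY header")

# Toy header mimicking the start of a merged.ply file
sample_header = """ply
format ascii 1.0
element vertex 18922
property float x
property float y
property float z
end_header"""

print(ply_vertex_count(sample_header))  # 18922
```

The same function works on a real file by reading everything up to the end_header line.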
reconstruction.json contains the coordinates of all reconstructed 3D points (the structure) plus the camera parameters (the motion). Its format is as follows:
reconstruction.json: [RECONSTRUCTION, ...]

RECONSTRUCTION: {
    "cameras": {
        CAMERA_ID: CAMERA,
        ...
    },
    "shots": {
        SHOT_ID: SHOT,
        ...
    },
    "points": {
        POINT_ID: POINT,
        ...
    }
}

CAMERA: {
    "projection_type": "perspective",  # Can be perspective, brown, fisheye or equirectangular
    "width": NUMBER,                   # Image width in pixels
    "height": NUMBER,                  # Image height in pixels
    # Depending on the projection type more parameters are stored.
    # These are the parameters of the perspective camera.
    "focal": NUMBER,                   # Estimated focal length
    "k1": NUMBER,                      # Estimated distortion coefficient
    "k2": NUMBER                       # Estimated distortion coefficient
}

SHOT: {
    "camera": CAMERA_ID,
    "rotation": [X, Y, Z],      # Estimated rotation as an angle-axis vector
    "translation": [X, Y, Z],   # Estimated translation
    "gps_position": [X, Y, Z],  # GPS coordinates in the reconstruction reference frame
    "gps_dop": METERS,          # GPS accuracy in meters
    "orientation": NUMBER,      # EXIF orientation tag (can be 1, 3, 6 or 8)
    "capture_time": SECONDS     # Capture time as a UNIX timestamp
}

POINT: {
    "coordinates": [X, Y, Z],   # Estimated position of the point
    "color": [R, G, B]          # Color of the point
}
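Plain json is enough to pull numbers out of reconstruction.json. A small sketch; the embedded sample is made-up data that follows the schema above (in a real project you would json.load the file from the dataset folder):

```python
import json

# A minimal reconstruction following the documented schema (values are made up)
sample = """
[{
  "cameras": {"cam1": {"projection_type": "perspective",
                       "width": 1920, "height": 1080,
                       "focal": 0.9, "k1": 0.0, "k2": 0.0}},
  "shots": {"img01.jpg": {"camera": "cam1",
                          "rotation": [0.1, 0.0, 0.0],
                          "translation": [0.0, 0.0, 1.0]}},
  "points": {"42": {"coordinates": [1.0, 2.0, 3.0],
                    "color": [200, 180, 160]}}
}]
"""

reconstructions = json.loads(sample)
rec = reconstructions[0]                # a dataset may contain several reconstructions
print(len(rec["points"]))               # number of triangulated 3D points
print(rec["cameras"]["cam1"]["focal"])  # estimated focal length
```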
This file can also be visualized in a browser with the help of a local server.
3 OpenSfM's commands
The official OpenSfM documentation lists the following commands, each explained by the comment to its right:
extract_metadata   # Extract metadata from images' EXIF tags
detect_features    # Compute features for all images
match_features     # Match features between image pairs
create_tracks      # Link pair-wise matches into tracks
reconstruct        # Compute the reconstruction
mesh               # Add delaunay meshes to the reconstruction
undistort          # Save radially undistorted images
compute_depthmaps  # Compute depthmap
export_ply         # Export reconstruction to PLY format
export_openmvs     # Export reconstruction to openMVS format
export_visualsfm   # Export reconstruction to NVM_V3 format from VisualSfM
Running these commands one by one is no different from executing the single convenience command:
# berlin is the folder holding the sample images
bin/opensfm_run_all data/berlin
Under the hood, each command executes the file of the same name in opensfm/commands under the root directory. For example, the reconstruct command runs opensfm/commands/reconstruct.py, whose contents are:
# The following is the full contents of opensfm/commands/reconstruct.py
import logging
import time

from opensfm import dataset
from opensfm import io
from opensfm import reconstruction

logger = logging.getLogger(__name__)


class Command:
    name = 'reconstruct'
    help = "Compute the reconstruction"

    def add_arguments(self, parser):
        parser.add_argument('dataset', help='dataset to process')

    def run(self, args):
        start = time.time()
        data = dataset.DataSet(args.dataset)
        tracks_manager = data.load_tracks_manager()
        report, reconstructions = reconstruction.\
            incremental_reconstruction(data, tracks_manager)
        end = time.time()
        with open(data.profile_log(), 'a') as fout:
            fout.write('reconstruct: {0}\n'.format(end - start))
        data.save_reconstruction(reconstructions)
        data.save_report(io.json_dumps(report), 'reconstruction.json')
The key line is the call to incremental_reconstruction(). That function is defined in opensfm/reconstruction.py, a file of roughly 1300 lines that implements the entire incremental SfM pipeline.
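The high-level strategy of incremental reconstruction can be illustrated with a toy ordering sketch: choose the image pair sharing the most tracks as the seed, then greedily add whichever remaining image observes the most already-covered tracks. This is only an illustration of the idea, not OpenSfM's actual code; the real incremental_reconstruction() also performs resection, triangulation, and bundle adjustment at every step:

```python
from itertools import combinations

def incremental_order(track_sets):
    """Toy incremental-SfM ordering: track_sets maps image -> set of track ids."""
    # 1. Choose the initial pair with the most shared tracks
    pair = max(combinations(track_sets, 2),
               key=lambda p: len(track_sets[p[0]] & track_sets[p[1]]))
    order = list(pair)
    covered = track_sets[pair[0]] | track_sets[pair[1]]
    # 2. Greedily add the image observing the most already-covered tracks
    remaining = set(track_sets) - set(order)
    while remaining:
        nxt = max(remaining, key=lambda im: len(track_sets[im] & covered))
        order.append(nxt)
        covered |= track_sets[nxt]
        remaining.remove(nxt)
    return order

# Tiny made-up example: three images, five tracks
tracks = {"a": {1, 2, 3}, "b": {2, 3, 4}, "c": {4, 5}}
print(incremental_order(tracks))  # ['a', 'b', 'c']
```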
4 Customizing OpenSfM
4.1 Using your own images
Running OpenSfM on your own images is easy. Create a new folder under data in the root directory (say, mytest), create mytest/images inside it, and put your images there. Then copy config.yaml from the sample berlin folder into mytest, and run the same command as before.
You can also customize the run settings in config.yaml. The defaults are as follows:
# Metadata
use_exif_size: yes
default_focal_prior: 0.85

# Params for features
feature_type: HAHOG # Feature type (AKAZE, SURF, SIFT, HAHOG, ORB)
feature_root: 1 # If 1, apply square root mapping to features
feature_min_frames: 4000 # If fewer frames are detected, sift_peak_threshold/surf_hessian_threshold is reduced.
feature_process_size: 2048 # Resize the image if its size is larger than specified. Set to -1 for original size
feature_use_adaptive_suppression: no

# Params for SIFT
sift_peak_threshold: 0.1 # Smaller value -> more features
sift_edge_threshold: 10 # See OpenCV doc

# Params for SURF
surf_hessian_threshold: 3000 # Smaller value -> more features
surf_n_octaves: 4 # See OpenCV doc
surf_n_octavelayers: 2 # See OpenCV doc
surf_upright: 0 # See OpenCV doc

# Params for AKAZE (See details in lib/src/third_party/akaze/AKAZEConfig.h)
akaze_omax: 4 # Maximum octave evolution of the image 2^sigma (coarsest scale sigma units)
akaze_dthreshold: 0.001 # Detector response threshold to accept point
akaze_descriptor: MSURF # Feature type
akaze_descriptor_size: 0 # Size of the descriptor in bits. 0->Full size
akaze_descriptor_channels: 3 # Number of feature channels (1,2,3)
akaze_kcontrast_percentile: 0.7
akaze_use_isotropic_diffusion: no

# Params for HAHOG
hahog_peak_threshold: 0.00001
hahog_edge_threshold: 10
hahog_normalize_to_uchar: yes

# Params for general matching
lowes_ratio: 0.8 # Ratio test for matches
matcher_type: FLANN # FLANN, BRUTEFORCE, or WORDS
symmetric_matching: yes # Match symmetrically or one-way

# Params for FLANN matching
flann_branching: 8 # See OpenCV doc
flann_iterations: 10 # See OpenCV doc
flann_checks: 20 # Smaller -> Faster (but might lose good matches)

# Params for BoW matching
bow_file: bow_hahog_root_uchar_10000.npz
bow_words_to_match: 50 # Number of words to explore per feature.
bow_num_checks: 20 # Number of matching features to check.
bow_matcher_type: FLANN # Matcher type to assign words to features

# Params for VLAD matching
vlad_file: bow_hahog_root_uchar_64.npz

# Params for matching
matching_gps_distance: 150 # Maximum gps distance between two images for matching
matching_gps_neighbors: 0 # Number of images to match selected by GPS distance. Set to 0 to use no limit (or disable if matching_gps_distance is also 0)
matching_time_neighbors: 0 # Number of images to match selected by time taken. Set to 0 to disable
matching_order_neighbors: 0 # Number of images to match selected by image name. Set to 0 to disable
matching_bow_neighbors: 0 # Number of images to match selected by BoW distance. Set to 0 to disable
matching_bow_gps_distance: 0 # Maximum GPS distance for preempting images before using selection by BoW distance. Set to 0 to disable
matching_bow_gps_neighbors: 0 # Number of images (selected by GPS distance) to preempt before using selection by BoW distance. Set to 0 to use no limit (or disable if matching_bow_gps_distance is also 0)
matching_bow_other_cameras: False # If True, BoW image selection will use N neighbors from the same camera + N neighbors from any different camera.
matching_vlad_neighbors: 0 # Number of images to match selected by VLAD distance. Set to 0 to disable
matching_vlad_gps_distance: 0 # Maximum GPS distance for preempting images before using selection by VLAD distance. Set to 0 to disable
matching_vlad_gps_neighbors: 0 # Number of images (selected by GPS distance) to preempt before using selection by VLAD distance. Set to 0 to use no limit (or disable if matching_vlad_gps_distance is also 0)
matching_vlad_other_cameras: False # If True, VLAD image selection will use N neighbors from the same camera + N neighbors from any different camera.
matching_use_filters: False # If True, removes static matches using ad-hoc heuristics

# Params for geometric estimation
robust_matching_threshold: 0.004 # Outlier threshold for fundamental matrix estimation as portion of image width
robust_matching_calib_threshold: 0.004 # Outlier threshold for essential matrix estimation during matching in radians
robust_matching_min_match: 20 # Minimum number of matches to accept matches between two images
five_point_algo_threshold: 0.004 # Outlier threshold for essential matrix estimation during incremental reconstruction in radians
five_point_algo_min_inliers: 20 # Minimum number of inliers for considering a two view reconstruction valid
five_point_refine_match_iterations: 10 # Number of LM iterations to run when refining relative pose during matching
five_point_refine_rec_iterations: 1000 # Number of LM iterations to run when refining relative pose during reconstruction
triangulation_threshold: 0.006 # Outlier threshold for accepting a triangulated point in radians
triangulation_min_ray_angle: 1.0 # Minimum angle between views to accept a triangulated point
triangulation_type: FULL # Triangulation type : either considering all rays (FULL), or using a RANSAC variant (ROBUST)
resection_threshold: 0.004 # Outlier threshold for resection in radians
resection_min_inliers: 10 # Minimum number of resection inliers to accept it

# Params for track creation
min_track_length: 2 # Minimum number of features/images per track

# Params for bundle adjustment
loss_function: SoftLOneLoss # Loss function for the ceres problem (see: http://ceres-solver.org/modeling.html#lossfunction)
loss_function_threshold: 1 # Threshold on the squared residuals. Usually cost is quadratic for smaller residuals and sub-quadratic above.
reprojection_error_sd: 0.004 # The standard deviation of the reprojection error
exif_focal_sd: 0.01 # The standard deviation of the exif focal length in log-scale
principal_point_sd: 0.01 # The standard deviation of the principal point coordinates
radial_distorsion_k1_sd: 0.01 # The standard deviation of the first radial distortion parameter
radial_distorsion_k2_sd: 0.01 # The standard deviation of the second radial distortion parameter
radial_distorsion_k3_sd: 0.01 # The standard deviation of the third radial distortion parameter
radial_distorsion_p1_sd: 0.01 # The standard deviation of the first tangential distortion parameter
radial_distorsion_p2_sd: 0.01 # The standard deviation of the second tangential distortion parameter
bundle_outlier_filtering_type: FIXED # Type of threshold for filtering outlier : either fixed value (FIXED) or based on actual distribution (AUTO)
bundle_outlier_auto_ratio: 3.0 # For AUTO filtering type, projections with larger reprojection than ratio-times-mean, are removed
bundle_outlier_fixed_threshold: 0.006 # For FIXED filtering type, projections with larger reprojection error after bundle adjustment are removed
optimize_camera_parameters: yes # Optimize internal camera parameters during bundle
bundle_max_iterations: 100 # Maximum optimizer iterations
retriangulation: yes # Retriangulate all points from time to time
retriangulation_ratio: 1.2 # Retriangulate when the number of points grows by this ratio
bundle_interval: 999999 # Bundle after adding 'bundle_interval' cameras
bundle_new_points_ratio: 1.2 # Bundle when the number of points grows by this ratio
local_bundle_radius: 3 # Max image graph distance for images to be included in local bundle adjustment
local_bundle_min_common_points: 20 # Minimum number of common points between images to be considered neighbors
local_bundle_max_shots: 30 # Max number of shots to optimize during local bundle adjustment
save_partial_reconstructions: no # Save reconstructions at every iteration

# Params for GPS alignment
use_altitude_tag: no # Use or ignore EXIF altitude tag
align_method: orientation_prior # orientation_prior or naive
align_orientation_prior: horizontal # horizontal, vertical or no_roll
bundle_use_gps: yes # Enforce GPS position in bundle adjustment
bundle_use_gcp: no # Enforce Ground Control Point position in bundle adjustment

# Params for navigation graph
nav_min_distance: 0.01 # Minimum distance for a possible edge between two nodes
nav_step_pref_distance: 6 # Preferred distance between camera centers
nav_step_max_distance: 20 # Maximum distance for a possible step edge between two nodes
nav_turn_max_distance: 15 # Maximum distance for a possible turn edge between two nodes
nav_step_forward_view_threshold: 15 # Maximum difference of angles in degrees between viewing directions for forward steps
nav_step_view_threshold: 30 # Maximum difference of angles in degrees between viewing directions for other steps
nav_step_drift_threshold: 36 # Maximum motion drift with respect to step directions for steps in degrees
nav_turn_view_threshold: 40 # Maximum difference of angles in degrees with respect to turn directions
nav_vertical_threshold: 20 # Maximum vertical angle difference in motion and viewing direction in degrees
nav_rotation_threshold: 30 # Maximum general rotation in degrees between cameras for steps

# Params for image undistortion
undistorted_image_format: jpg # Format in which to save the undistorted images
undistorted_image_max_size: 100000 # Max width and height of the undistorted image

# Params for depth estimation
depthmap_method: PATCH_MATCH_SAMPLE # Raw depthmap computation algorithm (PATCH_MATCH, BRUTE_FORCE, PATCH_MATCH_SAMPLE)
depthmap_resolution: 640 # Resolution of the depth maps
depthmap_num_neighbors: 10 # Number of neighboring views
depthmap_num_matching_views: 6 # Number of neighboring views used for each depthmaps
depthmap_min_depth: 0 # Minimum depth in meters. Set to 0 to auto-infer from the reconstruction.
depthmap_max_depth: 0 # Maximum depth in meters. Set to 0 to auto-infer from the reconstruction.
depthmap_patchmatch_iterations: 3 # Number of PatchMatch iterations to run
depthmap_patch_size: 7 # Size of the correlation patch
depthmap_min_patch_sd: 1.0 # Patches with lower standard deviation are ignored
depthmap_min_correlation_score: 0.1 # Minimum correlation score to accept a depth value
depthmap_same_depth_threshold: 0.01 # Threshold to measure depth closeness
depthmap_min_consistent_views: 3 # Min number of views that should reconstruct a point for it to be valid
depthmap_save_debug_files: no # Save debug files with partial reconstruction results

# Other params
processes: 1 # Number of threads to use

# Params for submodel split and merge
submodel_size: 80 # Average number of images per submodel
submodel_overlap: 30.0 # Radius of the overlapping region between submodels
submodels_relpath: "submodels" # Relative path to the submodels directory
submodel_relpath_template: "submodels/submodel_%04d" # Template to generate the relative path to a submodel directory
submodel_images_relpath_template: "submodels/submodel_%04d/images" # Template to generate the relative path to a submodel images directory
To change one of these parameters, simply write it into config.yaml. For example, the default feature detector is HAHOG; to switch to SIFT, add the following to config.yaml:
# Params for features
feature_type: SIFT # Feature type (AKAZE, SURF, SIFT, HAHOG, ORB)
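Conceptually, config.yaml is an override layer: OpenSfM loads its built-in defaults and then applies whatever keys your file defines, leaving the rest untouched. The effect can be sketched with plain dicts (a simplification; the real loader parses YAML):

```python
# Default values (excerpt) as OpenSfM would load them
defaults = {
    "feature_type": "HAHOG",
    "feature_min_frames": 4000,
    "processes": 1,
}

# Keys written into data/mytest/config.yaml
user_config = {"feature_type": "SIFT"}

# Later keys win: the user file overrides the defaults
config = {**defaults, **user_config}
print(config["feature_type"])        # SIFT
print(config["feature_min_frames"])  # 4000 (untouched default)
```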
4.2 Providing your own camera intrinsics
If the camera intrinsics of the input images are already known, you can provide them to OpenSfM directly. This spares OpenSfM from having to estimate the intrinsics itself and makes the reconstruction more accurate.
To do so, create a camera_models_overrides.json file in the project folder (e.g. mytest) containing the known intrinsics. The intrinsics in camera_models_overrides.json override the default camera_models.json: when extract_metadata runs, the values from camera_models_overrides.json are copied into camera_models.json and used by the rest of the pipeline.
For example, suppose all input images were taken with the same, known intrinsics. To supply them to the program, create camera_models_overrides.json with the following contents:
{
    "all": {
        "projection_type": "perspective",
        "width": 1920,
        "height": 1080,
        "focal": 0.9,
        "k1": 0.0,
        "k2": 0.0
    }
}
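If your calibration tool reports the focal length in pixels, note that the "focal" field here is normalized. The sketch below generates camera_models_overrides.json from pixel units; the normalization by the larger image dimension is my understanding of OpenSfM's convention, so verify it against the docs for your version:

```python
import json

def make_override(width, height, focal_pixels, k1=0.0, k2=0.0):
    # Assumption: OpenSfM stores focal normalized by the larger image
    # dimension -- check the OpenSfM docs for your version
    return {
        "all": {
            "projection_type": "perspective",
            "width": width,
            "height": height,
            "focal": focal_pixels / max(width, height),
            "k1": k1,
            "k2": k2,
        }
    }

override = make_override(1920, 1080, 1728.0)
text = json.dumps(override, indent=4)
# In a real project: write `text` to data/mytest/camera_models_overrides.json
print(override["all"]["focal"])  # 0.9
```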
5 Results
The best result I have obtained was with the Model House dataset from http://www.robots.ox.ac.uk/~vgg/data/mview/: 10 images, default parameters, 3378 reconstructed 3D points. (The input images and the resulting reconstruction are shown as figures in the original post.)
That said, OpenSfM is not especially robust to images with cluttered backgrounds or few features. One set of apple photos, for example, came out blurry: from 15 input images, the reconstruction was poor (again, see the figures in the original post).
6 Practical tips
After tweaking a few parameters and running several different datasets, I noticed the following.
User-supplied intrinsics are not treated as constants during bundle adjustment. When I provided my own intrinsics and ran OpenSfM, the intrinsics in reconstruction.json had changed by the end of the reconstruction. The likely reason: the values written in camera_models_overrides.json serve only as initial values, and during bundle adjustment OpenSfM still optimizes the intrinsics as variables along with everything else. So supplying intrinsics in camera_models_overrides.json really means "supplying an initial guess"; without them, the program simply falls back to its own default initial values. That said, providing your own intrinsics does change the final reconstruction (at the very least, the reported intrinsics differ).
Feeding in too many images can make the program crash midway, exiting with an error (shown as a screenshot in the original post).
I have found and tried only two useful remedies so far. First, set a smaller number of processes in config.yaml: with 89 input images, the run crashed with 10 processes but finished successfully with 8. Second, if the input consists of frames from a continuous, slowly changing video, use only every other frame or so. On one 104-image dataset, all 104 images yielded 18922 3D points in about 43 minutes, while a subset of 54 images yielded 18608 points in about 12 minutes. With such video frames, the reconstructions from all frames and from subsampled frames differ only slightly at the boundaries. I compared the two point clouds (104 vs. 54 images) in Meshlab using the "Hausdorff Distance" filter under "Sampling", then visualized the differences with "Color Creation and Processing" → "Colorize By Vertex Quality". See my other post for the detailed Meshlab steps.
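Subsampling a frame sequence like this takes only a couple of lines. A sketch (the stride of 2 roughly reproduces the 104 → 54 experiment; the frame_%03d.jpg naming scheme is made up for illustration):

```python
def subsample(filenames, stride=2):
    """Keep every `stride`-th frame of a sorted image sequence."""
    return sorted(filenames)[::stride]

# Made-up frame names standing in for the contents of mytest/images
frames = ["frame_%03d.jpg" % i for i in range(104)]
kept = subsample(frames, stride=2)
print(len(kept))  # 52
```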
Meshlab's output was as follows:
Hausdorff Distance computed
Sampled 1601604 pts (rng: 0) on merged(1).ply searched closest on merged.ply
min : 0.000538 max 1.193911 mean : 0.238936 RMS : 0.276795
Values w.r.t. BBox Diag (14.006348)
min : 0.000038 max 0.085241 mean : 0.017059 RMS : 0.019762
Applied filter Hausdorff Distance in 26137 msec
Quality Range: 0.000538 1.193911; Used (0.000538 1.190000)
Applied filter Colorize by vertex Quality in 129 msec
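What Meshlab computes here can be reproduced on toy data: for each sampled point of one cloud, find its nearest neighbor in the other cloud and report min/max/mean/RMS of those distances. A brute-force pure-Python sketch (fine for a handful of points; Meshlab uses spatial indexing to handle millions):

```python
import math

def nn_distance_stats(cloud_a, cloud_b):
    """For each point in cloud_a, distance to its nearest neighbor in cloud_b."""
    dists = [min(math.dist(p, q) for q in cloud_b) for p in cloud_a]
    n = len(dists)
    mean = sum(dists) / n
    rms = math.sqrt(sum(d * d for d in dists) / n)
    return {"min": min(dists), "max": max(dists), "mean": mean, "rms": rms}

# Two tiny made-up point clouds
a = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
b = [(0.0, 0.0, 0.0), (1.0, 0.5, 0.0)]
stats = nn_distance_stats(a, b)
print(stats["min"], stats["max"])  # 0.0 0.5
```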
- To speed up OpenSfM when the input consists of frames from a continuous video (a large linear dataset), try adding the following two lines to config.yaml:
local_bundle_radius: 1 # Max image graph distance for images to be included in local bundle adjustment
bundle_interval: 9999999 # Bundle after adding 'bundle_interval' cameras
The first line means that as incremental SfM adds each new image, local bundle adjustment optimizes only the new image and its directly adjacent images in the image graph. The second means the big global bundle adjustment runs only once, after all images have been added, rather than every time an image is inserted.
This recipe was suggested by one of OpenSfM's authors in response to a GitHub issue.
Reference
OpenSfM official website: https://www.opensfm.org/
OpenSfM on GitHub: https://github.com/mapillary/OpenSfM