Notes on using Kinect v2 with ROS on Ubuntu

In my opinion, the most important resources are the following:

1. Microsoft Kinect v2 Driver Released

   http://www.ros.org/news/2014/09/microsoft-kinect-v2-driver-released.html

2. OpenKinect (libfreenect2)

   https://github.com/OpenKinect/libfreenect2

3. code-iai (iai_kinect2)

   https://github.com/code-iai/iai_kinect2

Basic demo video:

http://v.youku.com/v_show/id_XMTQyNDAzNTM2OA

Shared robotics course materials:

http://pan.baidu.com/s/1eRrM4QA

rtabmap with Kinect v2:

http://official-rtab-map-forum.67519.x6.nabble.com/rtabmap-for-kinect-v2-on-ROS-Indigo-td61.html

https://github.com/introlab

Appendix:

libfreenect2

Table of Contents

  • Description
  • Requirements
  • Troubleshooting
  • Maintainers
  • Installation
    • Windows / Visual Studio
    • Mac OS X
    • Linux
  • API Documentation (external)

Description

Driver for Kinect for Windows v2 (K4W2) devices (release and developer preview).

Note: libfreenect2 does not do anything for either Kinect for Windows v1 or Kinect for Xbox 360 sensors. Use libfreenect1 for those sensors.

If you are using libfreenect2 in an academic context, please cite our work using the following DOI:

If you use the KDE depth unwrapping algorithm implemented in the library, please also cite this ECCV 2016 paper.

This driver supports:

  • RGB image transfer
  • IR and depth image transfer
  • registration of RGB and depth images

Missing features:

  • firmware updates (see issue #460 for WiP)

Watch the OpenKinect wiki at www.openkinect.org and the mailing list at https://groups.google.com/forum/#!forum/openkinect for the latest developments and more information about the K4W2 USB protocol.

The API reference documentation is provided here https://openkinect.github.io/libfreenect2/.

Requirements

Hardware requirements

  • USB 3.0 controller. USB 2 is not supported.

Intel and NEC USB 3.0 host controllers are known to work. ASMedia controllers are known to not work.

Virtual machines likely do not work, because USB 3.0 isochronous transfer is quite delicate.

Requirements for multiple Kinects

It has been reported to work with up to 5 devices on a high-end PC, using multiple separate PCI Express USB 3.0 expansion cards (with NEC controller chips). If you're using Linux, you may have to increase the USBFS memory buffer; depending on the number of Kinects, an even larger buffer size may be needed (a sketch of how to do this follows below). If you're using an expansion card, make sure it is not plugged into a PCI-E x1 slot: a single lane doesn't have enough bandwidth. x8 or x16 slots usually work.
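
For example, a minimal sketch of raising the USBFS buffer on a recent Ubuntu kernel (the 64 MB value is only an example; size it to your number of Kinects):

    # Check the current USBFS buffer size (the stock default is 16 MB)
    cat /sys/module/usbcore/parameters/usbfs_memory_mb

    # Raise it until the next reboot (needs root)
    sudo sh -c 'echo 64 > /sys/module/usbcore/parameters/usbfs_memory_mb'

    # To make it permanent, add usbcore.usbfs_memory_mb=64 to the kernel
    # command line (e.g. GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub),
    # then run: sudo update-grub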

Operating system requirements

  • Windows 7 (buggy), Windows 8, Windows 8.1, and probably Windows 10
  • Debian, Ubuntu 14.04 or newer, probably other Linux distros. Recommend kernel 3.16+ or as new as possible.
  • Mac OS X

Requirements for optional features

  • OpenGL depth processing: OpenGL 3.1 (Windows, Linux, Mac OS X). OpenGL ES is not supported at the moment.
  • OpenCL depth processing: OpenCL 1.1
  • CUDA depth processing: CUDA (6.5 and 7.5 are tested; the minimum version is not clear.)
  • VAAPI JPEG decoding: Intel (minimum Ivy Bridge or newer) and Linux only
  • VideoToolbox JPEG decoding: Mac OS X only
  • OpenNI2 integration: OpenNI2 2.2.0.33
  • Jetson TK1: Linux4Tegra 21.3 or later. Check Jetson TK1 issues before installation. Jetson TX1 is not yet supported as the developers don't have one, but it may be easy to add support.

Troubleshooting and reporting bugs

First, check https://github.com/OpenKinect/libfreenect2/wiki/Troubleshooting for known issues.

When you report USB issues, please attach the relevant debug log from running the program with the environment variable LIBUSB_DEBUG=3, along with the relevant output of dmesg. Also include hardware information from lspci and lsusb -t (see the example below).
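
For example, on Linux the relevant information can be collected like this (the log file names are just placeholders):

    # Run the test program with verbose libusb logging
    LIBUSB_DEBUG=3 ./bin/Protonect 2>&1 | tee protonect_usb.log

    # Kernel messages around the time of the failure
    dmesg | tail -n 100 > dmesg.log

    # Hardware information: USB controllers and the USB device tree
    lspci | grep -i usb > lspci.log
    lsusb -t > lsusb.log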

Maintainers

  • Joshua Blake joshblake@gmail.com
  • Florian Echtler
  • Christian Kerl
  • Lingzhu Xiang (development/master branch)

Installation

Windows / Visual Studio

  • Install UsbDk driver

    1. (Windows 7) You must first install Microsoft Security Advisory 3033929 otherwise your USB keyboards and mice will stop working!
    2. Download the latest x64 installer from https://github.com/daynix/UsbDk/releases, install it.
    3. If UsbDk somehow does not work, uninstall UsbDk and follow the libusbK instructions.

    This doesn't interfere with the Microsoft SDK. Do not install both the UsbDk and libusbK drivers.

  • (Alternatively) Install libusbK driver

    You don't need the Kinect for Windows v2 SDK to build and install libfreenect2, though it doesn't hurt to have it too. You don't need to uninstall the SDK or the driver before doing this procedure.

    Install the libusbK backend driver for libusb. Please follow the steps exactly:

    1. Download Zadig from http://zadig.akeo.ie/.
    2. Run Zadig and in options, check "List All Devices" and uncheck "Ignore Hubs or Composite Parents"
    3. Select the "Xbox NUI Sensor (composite parent)" from the drop-down box. (Important: Ignore the "NuiSensor Adaptor" varieties, which are the adapter, NOT the Kinect) The current driver will list usbccgp. USB ID is VID 045E, PID 02C4 or 02D8.
    4. Select libusbK (v3.0.7.0 or newer) from the replacement driver list.
    5. Click the "Replace Driver" button. Click yes on the warning about replacing a system driver. (This is because it is a composite parent.)

    To uninstall the libusbK driver (and get back the official SDK driver, if installed):

    1. Open "Device Manager"
    2. Under "libusbK USB Devices" tree, right click the "Xbox NUI Sensor (Composite Parent)" device and select uninstall.
    3. Important: Check the "Delete the driver software for this device." checkbox, then click OK.

    If you already had the official SDK driver installed and you want to use it:

    1. In Device Manager, in the Action menu, click "Scan for hardware changes."

    This will enumerate the Kinect sensor again and it will pick up the K4W2 SDK driver, and you should be ready to run KinectService.exe again immediately.

    You can go back and forth between the SDK driver and the libusbK driver very quickly and easily with these steps.

  • Build libusb

    Open a Git shell (GitHub for Windows), or any shell that has access to git.exe and msbuild.exe

    cd depends/
    .\install_libusb_vs2013.cmd
    

    Or install_libusb_vs2015.cmd. If you see some errors, you can always open the cmd files and follow the git commands, and maybe build libusb_201x.sln with Visual Studio by hand. Building with "Win32" is not recommended as it results in lower performance.

  • Install TurboJPEG

    Download from http://sourceforge.net/projects/libjpeg-turbo/files, extract it to c:\libjpeg-turbo64 or depends/libjpeg-turbo64, or anywhere as specified by the environment variable TurboJPEG_ROOT.

  • Install GLFW

    Download from http://www.glfw.org/download.html (64-bit), extract as depends/glfw (rename glfw-3.x.x.bin.WIN64 to glfw), or anywhere as specified by the environment variable GLFW_ROOT.

  • Install OpenCL (optional)
    1. Intel GPU: Download "Intel® SDK for OpenCL™ Applications 2016" from https://software.intel.com/en-us/intel-opencl (requires free registration) and install it.
  • Install CUDA (optional, Nvidia only)
    1. Download CUDA Toolkit and install it. NOTE: CUDA 7.5 does not support Visual Studio 2015.
  • Install OpenNI2 (optional)

    Download OpenNI 2.2.0.33 (x64) from http://structure.io/openni, install it to default locations (C:\Program Files...).

  • Build

    The default installation path is install, you may change it by editing CMAKE_INSTALL_PREFIX.

    mkdir build && cd build
    cmake .. -G "Visual Studio 12 2013 Win64"
    cmake --build . --config RelWithDebInfo --target install
    

    Or -G "Visual Studio 14 2015 Win64".

  • Run the test program: .\install\bin\Protonect.exe, or start debugging in Visual Studio.
  • Test OpenNI2 (optional)

    Copy freenect2-openni2.dll and the other dll files (libusb-1.0.dll, glfw.dll, etc.) from install\bin to C:\Program Files\OpenNI2\Tools\OpenNI2\Drivers. Then run C:\Program Files\OpenNI2\Tools\NiViewer.exe. Environment variable LIBFREENECT2_PIPELINE can be set to cl, cuda, etc. to specify the pipeline.

Mac OS X

Use your favorite package managers (brew, ports, etc.) to install most if not all dependencies:

  • Make sure these build tools are available: wget, git, cmake, pkg-config. Xcode may provide some of them. Install the rest via package managers.
  • Download libfreenect2 source

    git clone https://github.com/OpenKinect/libfreenect2.git
    cd libfreenect2
    
  • Install dependencies: libusb, GLFW

    brew update
    brew install libusb
    brew tap homebrew/versions
    brew install glfw3
    
  • Install TurboJPEG (optional)

    brew tap homebrew/science
    brew install jpeg-turbo
    
  • Install CUDA (optional): TODO
  • Install OpenNI2 (optional)

    brew install openni2
    export OPENNI2_REDIST=/usr/local/lib/ni2
    export OPENNI2_INCLUDE=/usr/local/include/ni2
    
  • Build

    mkdir build && cd build
    cmake ..
    make
    make install
    
  • Run the test program: ./bin/Protonect
  • Test OpenNI2. make install-openni2 (may need sudo), then run NiViewer. Environment variable LIBFREENECT2_PIPELINE can be set to cl, cuda, etc to specify the pipeline.

Linux

Note: Ubuntu 12.04 is too old to support. Debian jessie may also be too old, and Debian stretch is implied in the following.

  • Download libfreenect2 source

    git clone https://github.com/OpenKinect/libfreenect2.git
    cd libfreenect2
    
  • (Ubuntu 14.04 only) Download upgrade deb files

    cd depends; ./download_debs_trusty.sh
    
  • Install build tools

    sudo apt-get install build-essential cmake pkg-config
    
  • Install libusb. The version must be >= 1.0.20.
    1. (Ubuntu 14.04 only) sudo dpkg -i debs/libusb*deb
    2. (Other) sudo apt-get install libusb-1.0-0-dev
  • Install TurboJPEG
    1. (Ubuntu 14.04 and newer) sudo apt-get install libturbojpeg libjpeg-turbo8-dev
    2. (Debian) sudo apt-get install libturbojpeg0-dev
  • Install OpenGL
    1. (Ubuntu 14.04 only) sudo dpkg -i debs/libglfw3*deb; sudo apt-get install -f; sudo apt-get install libgl1-mesa-dri-lts-vivid (If the last command conflicts with other packages, don't do it.)
    2. (Odroid XU4) OpenGL 3.1 is not supported on this platform. Use cmake -DENABLE_OPENGL=OFF later.
    3. (Other) sudo apt-get install libglfw3-dev
  • Install OpenCL (optional)
    • Intel GPU

      1. (Ubuntu 14.04 only) sudo apt-add-repository ppa:floe/beignet; sudo apt-get update; sudo apt-get install beignet-dev; sudo dpkg -i debs/ocl-icd*deb
      2. (Other) sudo apt-get install beignet-dev
      3. For older kernels, # echo 0 >/sys/module/i915/parameters/enable_cmd_parser is needed. See more known issues at https://www.freedesktop.org/wiki/Software/Beignet/.
    • AMD GPU: Install the latest version of the AMD Catalyst drivers from https://support.amd.com and apt-get install opencl-headers.
    • Mali GPU (e.g. Odroid XU4): (with root) mkdir -p /etc/OpenCL/vendors; echo /usr/lib/arm-linux-gnueabihf/mali-egl/libmali.so >/etc/OpenCL/vendors/mali.icd; apt-get install opencl-headers.
    • Verify: You can install clinfo to verify if you have correctly set up the OpenCL stack.
  • Install CUDA (optional, Nvidia only):
    • (Ubuntu 14.04 only) Download cuda-repo-ubuntu1404...*.deb ("deb (network)") from Nvidia website, follow their installation instructions, including apt-get install cuda which installs Nvidia graphics driver.
    • (Jetson TK1) It is preloaded.
    • (Nvidia/Intel dual GPUs) After apt-get install cuda, use sudo prime-select intel to use Intel GPU for desktop.
    • (Other) Follow Nvidia website's instructions.
  • Install VAAPI (optional, Intel only)
    1. (Ubuntu 14.04 only) sudo dpkg -i debs/{libva,i965}*deb; sudo apt-get install -f
    2. (Other) sudo apt-get install libva-dev libjpeg-dev
    3. Linux kernels 4.1 to 4.3 have performance regression. Use 4.0 and earlier or 4.4 and later (Though Ubuntu kernel 4.2.0-28.33~14.04.1 has backported the fix).
  • Install OpenNI2 (optional)
    1. (Ubuntu 14.04 only) sudo apt-add-repository ppa:deb-rob/ros-trusty && sudo apt-get update (You don't need this if you have ROS repos), then sudo apt-get install libopenni2-dev
    2. (Other) sudo apt-get install libopenni2-dev.
  • Build

    cd ..
    mkdir build && cd build
    cmake .. -DCMAKE_INSTALL_PREFIX=$HOME/freenect2
    make
    make install
    

    You need to specify cmake -Dfreenect2_DIR=$HOME/freenect2/lib/cmake/freenect2 for CMake-based third-party applications to find libfreenect2.

  • Set up udev rules for device access: sudo cp ../platform/linux/udev/90-kinect2.rules /etc/udev/rules.d/, then replug the Kinect.
  • Run the test program: ./bin/Protonect
  • Run OpenNI2 test (optional): sudo apt-get install openni2-utils && sudo make install-openni2 && NiViewer2. Environment variable LIBFREENECT2_PIPELINE can be set to cl, cuda, etc. to specify the pipeline (see the example below).
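
As an illustration (assuming the build above and suitable GPU support), the depth-processing pipeline can be chosen either as a Protonect argument or through the environment variable:

    # Pick the pipeline explicitly
    ./bin/Protonect cpu    # always available, slowest
    ./bin/Protonect gl     # OpenGL
    ./bin/Protonect cl     # OpenCL
    ./bin/Protonect cuda   # CUDA (Nvidia only)

    # Or select it via the environment variable mentioned above
    LIBFREENECT2_PIPELINE=cl ./bin/Protonect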

IAI Kinect2

Maintainer

  • Thiemo Wiedemeyer <wiedemeyer@cs.uni-bremen.de>, Institute for Artificial Intelligence, University of Bremen

Read this first

Please read this README and the ones of the individual components thoroughly before asking questions. We get a lot of repeated questions, so when you have a problem, we urge you to check the GitHub issues (including closed ones); your issue has very likely been discussed there already.

The goal of this project is to give you a driver and the tools needed to receive data from the Kinect-2 sensor, in a way useful for robotics. You will still need to know how to use ROS to make use of it. Please follow the ROS tutorials. You will also need to learn how to work with point-clouds, or depth-clouds, or images (computer vision) to do useful things with the data.

Note: Please use the GitHub issues for questions and problems regarding the iai_kinect2 package and its components. Do not write emails.

Table of contents

  • Description
  • FAQ
  • Dependencies
  • Install
  • GPU acceleration
    • OpenCL with AMD
    • OpenCL/CUDA with Nvidia
    • OpenCL with Intel
  • Citation
  • Screenshots

Description

This is a collection of tools and libraries for a ROS Interface to the Kinect One (Kinect v2).

It contains:

  • a calibration tool for calibrating the IR sensor of the Kinect One to the RGB sensor and the depth measurements
  • a library for depth registration with OpenCL support
  • the bridge between libfreenect2 and ROS
  • a viewer for the images / point clouds

FAQ

If I have a question or something is not working, what should I do first?

First, look at this FAQ and the FAQ from libfreenect2. Second, look at the issue page of libfreenect2 and the issue page of iai_kinect2 for similar issues and solutions.

Point clouds are not being published?

Point clouds are only published when the launch file is used. Make sure to start kinect2_bridge with roslaunch kinect2_bridge kinect2_bridge.launch.
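
A quick way to confirm the clouds actually show up once the launch file is running (the topic names assume the default kinect2_bridge namespaces):

roslaunch kinect2_bridge kinect2_bridge.launch
# in a second terminal: list the point cloud topics and check their rate
rostopic list | grep points
rostopic hz /kinect2/sd/points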

Will it work with OpenCV 3.0?

Short answer: No.

Long answer: Yes, it is possible to compile this package with OpenCV 3.0, but it will not work. This is because cv_bridge is used, which itself is compiled with OpenCV 2.4.x in ROS Indigo/Jade, and linking against both OpenCV versions is not possible. Working support for OpenCV 3.0 might come with a future ROS release.

kinect2_bridge is not working / crashing, what is wrong?

There are many reasons why kinect2_bridge might not be working. The first thing is to find out whether the problem is related to kinect2_bridge or to libfreenect2. A good tool for testing is Protonect, a binary located at libfreenect2/build/bin/Protonect. It uses libfreenect2 directly with minimal dependencies on other libraries, so it is a good tool for the first tests.

Execute:

  • ./Protonect gl to test OpenGL support.
  • ./Protonect cl to test OpenCL support.
  • ./Protonect cpu to test CPU support.

Before running kinect2_bridge, please make sure Protonect is working and showing color, depth, and IR images. If some of them are black, then there is a problem that is not related to kinect2_bridge, and you should look at the issues on the libfreenect2 GitHub page for help.

If one of them works, try out the one that worked with kinect2_bridge: rosrun kinect2_bridge kinect2_bridge _depth_method:=<opengl|opencl|cpu>. You can also change the registration method with _reg_method:=<cpu|opencl>.

Protonect works fine, but kinect2_bridge is still not working / crashing.

If that is the case, you have to make sure that Protonect uses the same version of libfreenect2 as kinect2_bridge does. To do so, run make and sudo make install in the build folder again, then try kinect2_bridge again.

cd libfreenect2/build
make && sudo make install

Also make sure that you are not using OpenCV 3.0.

If it is still crashing, compile it in debug and run it with gdb:

cd <catkin_ws>
catkin_make -DCMAKE_BUILD_TYPE="Debug"
cd devel/lib/kinect2_bridge
gdb kinect2_bridge
# inside gdb: run until it crashes, then print a backtrace
run
bt
quit

Open an issue and post the problem description and the output from the backtrace (bt).

kinect2_bridge hangs and prints "waiting for clients to connect"

This is the normal behavior. kinect2_bridge will only process data when clients are connected (ROS nodes listening to at least one of the topics). This saves CPU and GPU resources. As soon as you start kinect2_viewer or rostopic hz on one of the topics, processing should start (see the example below).
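
For example, either of the following registers a client and starts the processing (the topic name is assumed from the default configuration):

# start the bundled viewer ...
rosrun kinect2_viewer kinect2_viewer kinect2 sd cloud
# ... or simply subscribe to one of the topics
rostopic hz /kinect2/sd/image_depth_rect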

rosdep: Cannot locate rosdep definition for [kinect2_bridge] or [kinect2_registration]

rosdep will output errors about not being able to locate [kinect2_bridge] and [kinect2_registration]. That is fine, because they are all part of the iai_kinect2 package and rosdep does not know these packages.

Protonect or kinect2_bridge outputs [TransferPool::submit] failed to submit transfer

This indicates problems with the USB connection.
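
A couple of quick checks that help narrow this down (they complement the USB troubleshooting notes above):

# confirm the Kinect is enumerated on a USB 3.0 (5000M) port
lsusb -t
# check the USBFS buffer size (see the multi-Kinect notes above)
cat /sys/module/usbcore/parameters/usbfs_memory_mb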

I still have an issue, what should I do?

First of all, check the issue pages on GitHub for similar issues, as they might already contain solutions. By default you will only see the open issues, but if you click on "Closed" you will also see the ones that have been solved. There is also a search field which helps to find similar issues.

If you found no solution in the issues, feel free to open a new issue for your problem. Please describe your problem in detail and provide error messages and log output.

Dependencies

  • ROS Hydro/Indigo
  • OpenCV (2.4.x, using the one from the official Ubuntu repositories is recommended)
  • PCL (1.7.x, using the one from the official Ubuntu repositories is recommended)
  • Eigen (optional, but recommended)
  • OpenCL (optional, but recommended)
  • libfreenect2 (>= v0.2.0, for stability checkout the latest stable release)

Install

  1. Install ROS. Instructions for Ubuntu 14.04
  2. Setup your ROS environment
  3. Install libfreenect2:

    Follow the instructions and enable C++11 by using cmake .. -DENABLE_CXX11=ON instead of cmake ..

    If something is not working, check out the latest stable release, for example git checkout v0.2.0.

  4. Clone this repository into your catkin workspace, install the dependencies and build it:

    cd ~/catkin_ws/src/
    git clone https://github.com/code-iai/iai_kinect2.git
    cd iai_kinect2
    rosdep install -r --from-paths .
    cd ~/catkin_ws
    catkin_make -DCMAKE_BUILD_TYPE="Release"
    

    Note: rosdep will output errors about not being able to locate [kinect2_bridge] and [depth_registration]. That is fine, because they are all part of the iai_kinect2 package and rosdep does not know these packages.

    Note: If you installed libfreenect2 somewhere other than $HOME/freenect2 or a standard location like /usr/local, you have to specify the path to it by adding -Dfreenect2_DIR=path_to_freenect2/lib/cmake/freenect2 to catkin_make.

  5. Connect your sensor and run kinect2_bridge:

    roslaunch kinect2_bridge kinect2_bridge.launch
    
  6. Calibrate your sensor using the kinect2_calibration. Further details
  7. Add the calibration files to the kinect2_bridge/data/<serialnumber> folder. Further details
  8. Restart kinect2_bridge and view the results using rosrun kinect2_viewer kinect2_viewer kinect2 sd cloud.

GPU acceleration

OpenCL with AMD

Install the latest version of the AMD Catalyst drivers from https://support.amd.com and follow the instructions. Also install opencl-headers.

sudo apt-get install opencl-headers
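
Afterwards, clinfo is a convenient way to confirm that an OpenCL platform and device are actually visible (this check applies to the Nvidia and Intel setups below as well):

sudo apt-get install clinfo
clinfo | grep -iE 'platform name|device name'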

OpenCL/CUDA with Nvidia

Go to developer.nvidia.com/cuda-downloads and select Linux, x86_64, Ubuntu, 14.04, deb (network). Download the file and follow the instructions. Also install nvidia-modprobe and opencl-headers.

sudo apt-get install nvidia-modprobe opencl-headers

You also need to add the CUDA paths to the system environment; add these lines to your ~/.bashrc:

export LD_LIBRARY_PATH="/usr/local/cuda/lib64:${LD_LIBRARY_PATH}"
export PATH="/usr/local/cuda/bin:${PATH}"

A system-wide configuration of the library path can be created with the following commands:

echo "/usr/local/cuda/lib64" | sudo tee /etc/ld.so.conf.d/cuda.conf
sudo ldconfig
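
A quick sanity check that the toolkit and driver are picked up (assuming the default /usr/local/cuda install):

nvcc --version   # compiler found via PATH
nvidia-smi       # driver loaded and GPU visible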

OpenCL with Intel

You can either install a binary package from a PPA like ppa:floe/beignet, or build beignet yourself. It's recommended to use the binary package from the PPA.

sudo add-apt-repository ppa:floe/beignet && sudo apt-get update
sudo apt-get install beignet beignet-dev opencl-headers

Citation

If you used iai_kinect2 for your work, please cite it.

@misc{iai_kinect2,
  author       = {Wiedemeyer, Thiemo},
  title        = {{IAI Kinect2}},
  organization = {Institute for Artificial Intelligence},
  address      = {University Bremen},
  year         = {2014 -- 2015},
  howpublished = {\url{https://github.com/code-iai/iai\_kinect2}},
  note         = {Accessed June 12, 2015}
}

The result should look something similar to this (may depend on the bibliography style used):

T. Wiedemeyer, “IAI Kinect2,” https://github.com/code-iai/iai_kinect2,
Institute for Artificial Intelligence, University Bremen, 2014 – 2015,
accessed June 12, 2015.

Screenshots

Here are some screenshots from our toolkit:
