Back to work after the Spring Festival. Let's get to it! ヾ(◍°∇°◍)ノ゙

ORB-SLAM2 is about as famous as it gets in the SLAM world, and countless forks and updated versions have been published on GitHub, so code this influential deserves a careful read. My brainpower is limited, though, and there is no way I can digest the whole codebase in one pass; along the way there will be things I don't understand or get wrong, and if you spot a mistake, corrections are very welcome.

ORB-SLAM2 grew out of ORB-SLAM. Why is it so well known? Because it is complete: monocular, stereo, and RGB-D all in one system. More importantly, it is open source, easy to install, and ships with demos for the major public datasets. Once it compiles, you can run TUM, KITTI, or EuRoC sequences almost immediately (the monocular TUM demo from the README, for instance, is launched as ./Examples/Monocular/mono_tum Vocabulary/ORBvoc.txt Examples/Monocular/TUM1.yaml PATH_TO_SEQUENCE_FOLDER), and your first reaction is simply: wow, SLAM is cool.

That said, truly digesting something cool is usually not a cool process at all; it can even be painful. Reading code is not like reading a novel: you need clear logic, and you need to understand what each line actually does. In short, it takes patience.

ORB-SLAM2 is a sizeable codebase, so it pays to understand the overall structure first and only then dig into the details. The code below is System.h and System.cc from ORB-SLAM2; the System class essentially builds the skeleton of the whole SLAM pipeline.

#ifndef SYSTEM_H
#define SYSTEM_H

#include <string>
#include <thread>
#include <opencv2/core/core.hpp>

#include "Tracking.h"
#include "FrameDrawer.h"
#include "MapDrawer.h"
#include "Map.h"
#include "LocalMapping.h"
#include "LoopClosing.h"
#include "KeyFrameDatabase.h"
#include "ORBVocabulary.h"
#include "Viewer.h"

namespace ORB_SLAM2
{

class Viewer;
class FrameDrawer;
class Map;
class Tracking;
class LocalMapping;
class LoopClosing;

class System
{
public:
    // Input sensor
    enum eSensor{
        MONOCULAR=0,
        STEREO=1,
        RGBD=2
    };

public:

    // Initialize the SLAM system. It launches the Local Mapping, Loop Closing and Viewer threads.
    System(const string &strVocFile, const string &strSettingsFile, const eSensor sensor, const bool bUseViewer = true);

    // Process the given stereo frame. Images must be synchronized and rectified.
    // Input images: RGB (CV_8UC3) or grayscale (CV_8U). RGB is converted to grayscale.
    // Returns the camera pose (empty if tracking fails).
    cv::Mat TrackStereo(const cv::Mat &imLeft, const cv::Mat &imRight, const double &timestamp);

    // Process the given RGB-D frame. The depth map must be registered to the RGB frame.
    // Input image: RGB (CV_8UC3) or grayscale (CV_8U). RGB is converted to grayscale.
    // Input depthmap: Float (CV_32F).
    // Returns the camera pose (empty if tracking fails).
    cv::Mat TrackRGBD(const cv::Mat &im, const cv::Mat &depthmap, const double &timestamp);

    // Process the given monocular frame.
    // Input images: RGB (CV_8UC3) or grayscale (CV_8U). RGB is converted to grayscale.
    // Returns the camera pose (empty if tracking fails).
    cv::Mat TrackMonocular(const cv::Mat &im, const double &timestamp);

    // This stops the local mapping thread (map building) and performs only camera tracking.
    void ActivateLocalizationMode();

    // This resumes the local mapping thread and performs SLAM again.
    void DeactivateLocalizationMode();

    // Returns true if there has been a big map change (loop closure, global BA)
    // since the last call to this function.
    bool MapChanged();

    // Reset the system (clear the map).
    void Reset();

    // All threads will be requested to finish.
    // It waits until all threads have finished.
    // This function must be called before saving the trajectory.
    void Shutdown();

    // Save camera trajectory in the TUM RGB-D dataset format.
    // Only for stereo and RGB-D. This method does not work for monocular.
    // Call Shutdown() first.
    // See format details at: http://vision.in.tum.de/data/datasets/rgbd-dataset
    void SaveTrajectoryTUM(const string &filename);

    // Save keyframe poses in the TUM RGB-D dataset format.
    // This method works for all sensor inputs.
    // Call Shutdown() first.
    // See format details at: http://vision.in.tum.de/data/datasets/rgbd-dataset
    void SaveKeyFrameTrajectoryTUM(const string &filename);

    // Save camera trajectory in the KITTI dataset format.
    // Only for stereo and RGB-D. This method does not work for monocular.
    // Call Shutdown() first.
    // See format details at: http://www.cvlibs.net/datasets/kitti/eval_odometry.php
    void SaveTrajectoryKITTI(const string &filename);

    // TODO: Save/Load functions
    // SaveMap(const string &filename);
    // LoadMap(const string &filename);

    // Information from the most recently processed frame.
    // You can call these right after TrackMonocular (or TrackStereo or TrackRGBD).
    int GetTrackingState();
    std::vector<MapPoint*> GetTrackedMapPoints();
    std::vector<cv::KeyPoint> GetTrackedKeyPointsUn();

private:

    // Input sensor
    eSensor mSensor;

    // ORB vocabulary used for place recognition and feature matching.
    ORBVocabulary* mpVocabulary;

    // KeyFrame database for place recognition (relocalization and loop detection).
    KeyFrameDatabase* mpKeyFrameDatabase;

    // Map structure that stores the pointers to all KeyFrames and MapPoints.
    Map* mpMap;

    // Tracker. It receives a frame and computes the associated camera pose.
    // It also decides when to insert a new keyframe, creates some new MapPoints and
    // performs relocalization if tracking fails.
    Tracking* mpTracker;

    // Local Mapper. It manages the local map and performs local bundle adjustment.
    LocalMapping* mpLocalMapper;

    // Loop Closer. It searches for loops with every new keyframe. If there is a loop it performs
    // a pose graph optimization and full bundle adjustment (in a new thread) afterwards.
    LoopClosing* mpLoopCloser;

    // The viewer draws the map and the current camera pose. It uses Pangolin.
    Viewer* mpViewer;

    FrameDrawer* mpFrameDrawer;
    MapDrawer* mpMapDrawer;

    // System threads: Local Mapping, Loop Closing, Viewer.
    // The Tracking thread "lives" in the main execution thread that creates the System object.
    std::thread* mptLocalMapping;
    std::thread* mptLoopClosing;
    std::thread* mptViewer;

    // Reset flag
    std::mutex mMutexReset;
    bool mbReset;

    // Change mode flags
    std::mutex mMutexMode;
    bool mbActivateLocalizationMode;
    bool mbDeactivateLocalizationMode;

    // Tracking state
    int mTrackingState;
    std::vector<MapPoint*> mTrackedMapPoints;
    std::vector<cv::KeyPoint> mTrackedKeyPointsUn;
    std::mutex mMutexState;
};

} // namespace ORB_SLAM2

#endif // SYSTEM_H
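Before walking through the implementation, it helps to see how a client drives this interface. The sketch below is modeled on the repository's mono_tum example, but simplified by me: the command-line layout and the fake 30 Hz timestamps are my own placeholders, not ORB-SLAM2 code.

#include <iostream>
#include <opencv2/opencv.hpp>
#include "System.h"

// Minimal monocular driver (schematic). Usage, with placeholder paths:
//   ./demo ORBvoc.txt camera.yaml frame0.png frame1.png ...
int main(int argc, char **argv)
{
    if(argc < 4)
    {
        std::cerr << "Usage: ./demo vocabulary settings image1 [image2 ...]" << std::endl;
        return 1;
    }

    // The constructor launches the Local Mapping, Loop Closing and Viewer threads;
    // Tracking stays in this (main) thread.
    ORB_SLAM2::System SLAM(argv[1], argv[2], ORB_SLAM2::System::MONOCULAR, true);

    for(int i = 3; i < argc; i++)
    {
        cv::Mat im = cv::imread(argv[i], cv::IMREAD_UNCHANGED);
        double tframe = (i - 3) / 30.0;  // fake timestamps, assuming 30 fps

        // Returns the current camera pose Tcw (empty if tracking failed).
        cv::Mat Tcw = SLAM.TrackMonocular(im, tframe);
    }

    // Stop all threads before saving; the keyframe trajectory works for monocular too.
    SLAM.Shutdown();
    SLAM.SaveKeyFrameTrajectoryTUM("KeyFrameTrajectory.txt");
    return 0;
}

Now for the implementation, System.cc: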
#include "System.h"
#include "Converter.h"
#include <thread>
#include <pangolin/pangolin.h>
#include <iomanip>//使用命名空间ORM_SLAM2
namespace ORB_SLAM2
{//System的构造函数,对一些参数进行设定
System::System(const string &strVocFile, const string &strSettingsFile, const eSensor sensor,const bool bUseViewer):mSensor(sensor), mpViewer(static_cast<Viewer*>(NULL)), mbReset(false),mbActivateLocalizationMode(false),mbDeactivateLocalizationMode(false)
{// Output welcome messagecout << endl <<"ORB-SLAM2 Copyright (C) 2014-2016 Raul Mur-Artal, University of Zaragoza." << endl <<"This program comes with ABSOLUTELY NO WARRANTY;" << endl  <<"This is free software, and you are welcome to redistribute it" << endl <<"under certain conditions. See LICENSE.txt." << endl << endl;cout << "Input sensor was set to: ";//系统的完整之处在于此,单目,立体,深度非常完备if(mSensor==MONOCULAR)cout << "Monocular" << endl;else if(mSensor==STEREO)cout << "Stereo" << endl;else if(mSensor==RGBD)cout << "RGB-D" << endl;//Check settings file//读取对应的参数设定的文件,如果运行过ORB-SLAM2就会知道对应的就是相机内参、帧率、基线(双目)//深度阈值,对应ORB Extractor的参数设定,还有Viewer线程的参数设定,XXX.yaml这种文件类型cv::FileStorage fsSettings(strSettingsFile.c_str(), cv::FileStorage::READ);if(!fsSettings.isOpened()){cerr << "Failed to open settings file at: " << strSettingsFile << endl;exit(-1);}//Load ORB Vocabulary//下载对应的词袋模型,对应的是.txt文件类型cout << endl << "Loading ORB Vocabulary. This could take a while..." << endl;mpVocabulary = new ORBVocabulary();bool bVocLoad = mpVocabulary->loadFromTextFile(strVocFile);if(!bVocLoad){cerr << "Wrong path to vocabulary. " << endl;cerr << "Falied to open at: " << strVocFile << endl;exit(-1);}cout << "Vocabulary loaded!" << endl << endl;//Create KeyFrame DatabasempKeyFrameDatabase = new KeyFrameDatabase(*mpVocabulary);//Create the MapmpMap = new Map();//Create Drawers. These are used by the ViewermpFrameDrawer = new FrameDrawer(mpMap);mpMapDrawer = new MapDrawer(mpMap, strSettingsFile);//Initialize the Tracking thread//(it will live in the main thread of execution, the one that called this constructor)//初始化Tracking线程mpTracker = new Tracking(this, mpVocabulary, mpFrameDrawer, mpMapDrawer,mpMap, mpKeyFrameDatabase, strSettingsFile, mSensor);//Initialize the Local Mapping thread and launch//初始化Local Mapping 线程mpLocalMapper = new LocalMapping(mpMap, mSensor==MONOCULAR);mptLocalMapping = new thread(&ORB_SLAM2::LocalMapping::Run,mpLocalMapper);//Initialize the Loop Closing thread and launch//初始化闭环检测线程mpLoopCloser = new LoopClosing(mpMap, mpKeyFrameDatabase, mpVocabulary, mSensor!=MONOCULAR);mptLoopClosing = new thread(&ORB_SLAM2::LoopClosing::Run, mpLoopCloser);//Initialize the Viewer thread and launch//初始化可视化线程if(bUseViewer){mpViewer = new Viewer(this, mpFrameDrawer,mpMapDrawer,mpTracker,strSettingsFile);mptViewer = new thread(&Viewer::Run, mpViewer);mpTracker->SetViewer(mpViewer);}//Set pointers between threadsmpTracker->SetLocalMapper(mpLocalMapper);mpTracker->SetLoopClosing(mpLoopCloser);mpLocalMapper->SetTracker(mpTracker);mpLocalMapper->SetLoopCloser(mpLoopCloser);mpLoopCloser->SetTracker(mpTracker);mpLoopCloser->SetLocalMapper(mpLocalMapper);
}//运行的系统设定为双目
// Track when the system is configured for a stereo camera.
cv::Mat System::TrackStereo(const cv::Mat &imLeft, const cv::Mat &imRight, const double &timestamp)
{
    if(mSensor!=STEREO)
    {
        cerr << "ERROR: you called TrackStereo but input sensor was not set to STEREO." << endl;
        exit(-1);
    }

    // Check mode change
    {
        unique_lock<mutex> lock(mMutexMode);
        if(mbActivateLocalizationMode)
        {
            mpLocalMapper->RequestStop();

            // Wait until Local Mapping has effectively stopped
            while(!mpLocalMapper->isStopped())
            {
                usleep(1000);
            }

            mpTracker->InformOnlyTracking(true);
            mbActivateLocalizationMode = false;
        }
        if(mbDeactivateLocalizationMode)
        {
            mpTracker->InformOnlyTracking(false);
            mpLocalMapper->Release();
            mbDeactivateLocalizationMode = false;
        }
    }

    // Check reset
    {
        unique_lock<mutex> lock(mMutexReset);
        if(mbReset)
        {
            mpTracker->Reset();
            mbReset = false;
        }
    }

    cv::Mat Tcw = mpTracker->GrabImageStereo(imLeft, imRight, timestamp);

    unique_lock<mutex> lock2(mMutexState);
    mTrackingState = mpTracker->mState;
    mTrackedMapPoints = mpTracker->mCurrentFrame.mvpMapPoints;
    mTrackedKeyPointsUn = mpTracker->mCurrentFrame.mvKeysUn;
    return Tcw;
}

// Track when the system is configured for an RGB-D camera.
cv::Mat System::TrackRGBD(const cv::Mat &im, const cv::Mat &depthmap, const double &timestamp)
{
    if(mSensor!=RGBD)
    {
        cerr << "ERROR: you called TrackRGBD but input sensor was not set to RGBD." << endl;
        exit(-1);
    }

    // Check mode change
    {
        unique_lock<mutex> lock(mMutexMode);
        if(mbActivateLocalizationMode)
        {
            mpLocalMapper->RequestStop();

            // Wait until Local Mapping has effectively stopped
            while(!mpLocalMapper->isStopped())
            {
                usleep(1000);
            }

            mpTracker->InformOnlyTracking(true);
            mbActivateLocalizationMode = false;
        }
        if(mbDeactivateLocalizationMode)
        {
            mpTracker->InformOnlyTracking(false);
            mpLocalMapper->Release();
            mbDeactivateLocalizationMode = false;
        }
    }

    // Check reset
    {
        unique_lock<mutex> lock(mMutexReset);
        if(mbReset)
        {
            mpTracker->Reset();
            mbReset = false;
        }
    }

    cv::Mat Tcw = mpTracker->GrabImageRGBD(im, depthmap, timestamp);

    unique_lock<mutex> lock2(mMutexState);
    mTrackingState = mpTracker->mState;
    mTrackedMapPoints = mpTracker->mCurrentFrame.mvpMapPoints;
    mTrackedKeyPointsUn = mpTracker->mCurrentFrame.mvKeysUn;
    return Tcw;
}

// Track when the system is configured for a monocular camera.
cv::Mat System::TrackMonocular(const cv::Mat &im, const double &timestamp)
{
    if(mSensor!=MONOCULAR)
    {
        cerr << "ERROR: you called TrackMonocular but input sensor was not set to Monocular." << endl;
        exit(-1);
    }

    // Check mode change
    {
        unique_lock<mutex> lock(mMutexMode);
        if(mbActivateLocalizationMode)
        {
            mpLocalMapper->RequestStop();

            // Wait until Local Mapping has effectively stopped
            while(!mpLocalMapper->isStopped())
            {
                usleep(1000);
            }

            mpTracker->InformOnlyTracking(true);
            mbActivateLocalizationMode = false;
        }
        if(mbDeactivateLocalizationMode)
        {
            mpTracker->InformOnlyTracking(false);
            mpLocalMapper->Release();
            mbDeactivateLocalizationMode = false;
        }
    }

    // Check reset
    {
        unique_lock<mutex> lock(mMutexReset);
        if(mbReset)
        {
            mpTracker->Reset();
            mbReset = false;
        }
    }

    cv::Mat Tcw = mpTracker->GrabImageMonocular(im, timestamp);

    unique_lock<mutex> lock2(mMutexState);
    mTrackingState = mpTracker->mState;
    mTrackedMapPoints = mpTracker->mCurrentFrame.mvpMapPoints;
    mTrackedKeyPointsUn = mpTracker->mCurrentFrame.mvKeysUn;
    return Tcw;
}
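Notice that TrackStereo, TrackRGBD and TrackMonocular repeat the same two blocks verbatim: apply any pending localization-mode switch, then apply any pending reset. Purely as a refactoring sketch of mine (this helper does not exist in ORB-SLAM2, and its declaration would have to be added to System.h), the shared preamble could be hoisted out:

// Hypothetical private helper: the duplicated preamble of the three Track* methods.
void System::ApplyPendingModeAndReset()
{
    // Check mode change
    {
        unique_lock<mutex> lock(mMutexMode);
        if(mbActivateLocalizationMode)
        {
            mpLocalMapper->RequestStop();
            while(!mpLocalMapper->isStopped())     // wait until Local Mapping has stopped
                usleep(1000);
            mpTracker->InformOnlyTracking(true);   // tracking continues, mapping paused
            mbActivateLocalizationMode = false;
        }
        if(mbDeactivateLocalizationMode)
        {
            mpTracker->InformOnlyTracking(false);
            mpLocalMapper->Release();              // mapping resumes
            mbDeactivateLocalizationMode = false;
        }
    }
    // Check reset
    {
        unique_lock<mutex> lock(mMutexReset);
        if(mbReset)
        {
            mpTracker->Reset();
            mbReset = false;
        }
    }
}

Each Track* method would then reduce to the sensor check, a call to this helper, the GrabImage* call, and the state bookkeeping under mMutexState.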
// Activate localization-only mode: raise the flag; the actual switch happens in the next Track* call.
void System::ActivateLocalizationMode()
{
    unique_lock<mutex> lock(mMutexMode);
    mbActivateLocalizationMode = true;
}

// Deactivate localization-only mode and return to full SLAM.
void System::DeactivateLocalizationMode()
{
    unique_lock<mutex> lock(mMutexMode);
    mbDeactivateLocalizationMode = true;
}
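Note that these two methods only raise flags under mMutexMode; the pause/resume itself happens inside the next Track* call, where the mode-change block above polls them. A client-side sketch (this wrapper function and its arguments are my own, not part of the API):

// Toggle between full SLAM and localization-only mode from client code.
void SetLocalizationOnly(ORB_SLAM2::System &SLAM, bool onlyLocalize)
{
    if(onlyLocalize)
        SLAM.ActivateLocalizationMode();    // local mapping pauses at the next tracked frame
    else
        SLAM.DeactivateLocalizationMode();  // mapping resumes and the map grows again
}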
// Has the map undergone a big change (loop closure, global BA) since the last call?
bool System::MapChanged()
{
    static int n=0;
    int curn = mpMap->GetLastBigChangeIdx();
    if(n<curn)
    {
        n=curn;
        return true;
    }
    else
        return false;
}

// Request a system reset (the map is cleared in the next Track* call).
void System::Reset()
{
    unique_lock<mutex> lock(mMutexReset);
    mbReset = true;
}

// Shut down the whole system.
void System::Shutdown()
{
    mpLocalMapper->RequestFinish();
    mpLoopCloser->RequestFinish();
    if(mpViewer)
    {
        mpViewer->RequestFinish();
        while(!mpViewer->isFinished())
            usleep(5000);
    }

    // Wait until all threads have effectively stopped
    while(!mpLocalMapper->isFinished() || !mpLoopCloser->isFinished() || mpLoopCloser->isRunningGBA())
    {
        usleep(5000);
    }

    if(mpViewer)
        pangolin::BindToContext("ORB-SLAM2: Map Viewer");
}

// Save the camera trajectory in the TUM format (only for stereo and RGB-D runs).
void System::SaveTrajectoryTUM(const string &filename)
{
    cout << endl << "Saving camera trajectory to " << filename << " ..." << endl;
    if(mSensor==MONOCULAR)
    {
        cerr << "ERROR: SaveTrajectoryTUM cannot be used for monocular." << endl;
        return;
    }

    vector<KeyFrame*> vpKFs = mpMap->GetAllKeyFrames();
    sort(vpKFs.begin(), vpKFs.end(), KeyFrame::lId);

    // Transform all keyframes so that the first keyframe is at the origin.
    // After a loop closure the first keyframe might not be at the origin.
    cv::Mat Two = vpKFs[0]->GetPoseInverse();

    ofstream f;
    f.open(filename.c_str());
    f << fixed;

    // Frame pose is stored relative to its reference keyframe (which is optimized by BA and pose graph).
    // We need to get first the keyframe pose and then concatenate the relative transformation.
    // Frames not localized (tracking failure) are not saved.

    // For each frame we have a reference keyframe (lRit), the timestamp (lT) and a flag
    // which is true when tracking failed (lbL).
    list<ORB_SLAM2::KeyFrame*>::iterator lRit = mpTracker->mlpReferences.begin();
    list<double>::iterator lT = mpTracker->mlFrameTimes.begin();
    list<bool>::iterator lbL = mpTracker->mlbLost.begin();
    for(list<cv::Mat>::iterator lit=mpTracker->mlRelativeFramePoses.begin(),
        lend=mpTracker->mlRelativeFramePoses.end(); lit!=lend; lit++, lRit++, lT++, lbL++)
    {
        if(*lbL)
            continue;

        KeyFrame* pKF = *lRit;

        cv::Mat Trw = cv::Mat::eye(4,4,CV_32F);

        // If the reference keyframe was culled, traverse the spanning tree to get a suitable keyframe.
        while(pKF->isBad())
        {
            Trw = Trw*pKF->mTcp;
            pKF = pKF->GetParent();
        }

        Trw = Trw*pKF->GetPose()*Two;

        cv::Mat Tcw = (*lit)*Trw;
        cv::Mat Rwc = Tcw.rowRange(0,3).colRange(0,3).t();
        cv::Mat twc = -Rwc*Tcw.rowRange(0,3).col(3);

        vector<float> q = Converter::toQuaternion(Rwc);

        f << setprecision(6) << *lT << " " << setprecision(9)
          << twc.at<float>(0) << " " << twc.at<float>(1) << " " << twc.at<float>(2) << " "
          << q[0] << " " << q[1] << " " << q[2] << " " << q[3] << endl;
    }
    f.close();
    cout << endl << "trajectory saved!" << endl;
}
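The pose algebra in SaveTrajectoryTUM is worth unpacking, since the variable names are terse. Reading Txy as "maps points from frame y to frame x" (my notation; the code itself only multiplies 4x4 matrices):

- Each frame stores only its pose relative to its reference keyframe, Tcr = *lit.
- If that keyframe was culled, the while loop accumulates mTcp (the culled keyframe's pose relative to its parent) up the spanning tree until a surviving ancestor is reached.
- Trw = Trw * GetPose() * Two then expresses the reference keyframe relative to the first keyframe, because Two, the inverse pose of keyframe 0, re-anchors the first keyframe at the origin.
- Finally Tcw = Tcr * Trw, and since the TUM format wants the camera-to-world transform, the code inverts it in closed form: Rwc = Rcw^T and twc = -Rcw^T * tcw (the camera center in world coordinates), with Rwc then converted to a quaternion.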
// Save the keyframe trajectory in the TUM format (works for all sensors, including monocular).
void System::SaveKeyFrameTrajectoryTUM(const string &filename)
{
    cout << endl << "Saving keyframe trajectory to " << filename << " ..." << endl;

    vector<KeyFrame*> vpKFs = mpMap->GetAllKeyFrames();
    sort(vpKFs.begin(), vpKFs.end(), KeyFrame::lId);

    // Transform all keyframes so that the first keyframe is at the origin.
    // After a loop closure the first keyframe might not be at the origin.
    //cv::Mat Two = vpKFs[0]->GetPoseInverse();

    ofstream f;
    f.open(filename.c_str());
    f << fixed;

    for(size_t i=0; i<vpKFs.size(); i++)
    {
        KeyFrame* pKF = vpKFs[i];

        // pKF->SetPose(pKF->GetPose()*Two);

        if(pKF->isBad())
            continue;

        cv::Mat R = pKF->GetRotation().t();
        vector<float> q = Converter::toQuaternion(R);
        cv::Mat t = pKF->GetCameraCenter();
        f << setprecision(6) << pKF->mTimeStamp << setprecision(7)
          << " " << t.at<float>(0) << " " << t.at<float>(1) << " " << t.at<float>(2)
          << " " << q[0] << " " << q[1] << " " << q[2] << " " << q[3] << endl;
    }
    f.close();
    cout << endl << "trajectory saved!" << endl;
}

// Save the camera trajectory in the KITTI format (only for stereo and RGB-D runs).
void System::SaveTrajectoryKITTI(const string &filename)
{
    cout << endl << "Saving camera trajectory to " << filename << " ..." << endl;
    if(mSensor==MONOCULAR)
    {
        cerr << "ERROR: SaveTrajectoryKITTI cannot be used for monocular." << endl;
        return;
    }

    vector<KeyFrame*> vpKFs = mpMap->GetAllKeyFrames();
    sort(vpKFs.begin(), vpKFs.end(), KeyFrame::lId);

    // Transform all keyframes so that the first keyframe is at the origin.
    // After a loop closure the first keyframe might not be at the origin.
    cv::Mat Two = vpKFs[0]->GetPoseInverse();

    ofstream f;
    f.open(filename.c_str());
    f << fixed;

    // Frame pose is stored relative to its reference keyframe (which is optimized by BA and pose graph).
    // We need to get first the keyframe pose and then concatenate the relative transformation.
    // Frames not localized (tracking failure) are not saved.

    // For each frame we have a reference keyframe (lRit) and the timestamp (lT).
    list<ORB_SLAM2::KeyFrame*>::iterator lRit = mpTracker->mlpReferences.begin();
    list<double>::iterator lT = mpTracker->mlFrameTimes.begin();
    for(list<cv::Mat>::iterator lit=mpTracker->mlRelativeFramePoses.begin(),
        lend=mpTracker->mlRelativeFramePoses.end(); lit!=lend; lit++, lRit++, lT++)
    {
        ORB_SLAM2::KeyFrame* pKF = *lRit;

        cv::Mat Trw = cv::Mat::eye(4,4,CV_32F);

        while(pKF->isBad())
        {
            // cout << "bad parent" << endl;
            Trw = Trw*pKF->mTcp;
            pKF = pKF->GetParent();
        }

        Trw = Trw*pKF->GetPose()*Two;

        cv::Mat Tcw = (*lit)*Trw;
        cv::Mat Rwc = Tcw.rowRange(0,3).colRange(0,3).t();
        cv::Mat twc = -Rwc*Tcw.rowRange(0,3).col(3);

        f << setprecision(9)
          << Rwc.at<float>(0,0) << " " << Rwc.at<float>(0,1) << " " << Rwc.at<float>(0,2) << " " << twc.at<float>(0) << " "
          << Rwc.at<float>(1,0) << " " << Rwc.at<float>(1,1) << " " << Rwc.at<float>(1,2) << " " << twc.at<float>(1) << " "
          << Rwc.at<float>(2,0) << " " << Rwc.at<float>(2,1) << " " << Rwc.at<float>(2,2) << " " << twc.at<float>(2) << endl;
    }
    f.close();
    cout << endl << "trajectory saved!" << endl;
}

int System::GetTrackingState()
{
    unique_lock<mutex> lock(mMutexState);
    return mTrackingState;
}

vector<MapPoint*> System::GetTrackedMapPoints()
{
    unique_lock<mutex> lock(mMutexState);
    return mTrackedMapPoints;
}

vector<cv::KeyPoint> System::GetTrackedKeyPointsUn()
{
    unique_lock<mutex> lock(mMutexState);
    return mTrackedKeyPointsUn;
}

} // namespace ORB_SLAM2

As the code above shows, each sensor model gets its own entry point even though the overall flow is shared. The code also lays out the system's four components (a stripped-down threading sketch follows this list):

(1) Main thread: the Tracking thread lives in the main thread. It extracts ORB features from each image, estimates the pose from the previous frame (or initializes the pose via global relocalization), tracks the already-reconstructed local map, optimizes the pose, and decides according to a few rules when to insert a new keyframe.

(2) Local Mapping thread: builds the local map. It inserts keyframes, verifies and filters recently created map points, generates new map points, runs local bundle adjustment (local BA), and finally culls redundant keyframes.

(3) Loop Closing thread: two stages, loop detection and loop correction. Detection first finds loop candidates with the bag-of-words (BoW) database, then computes the similarity transformation with the Sim3 algorithm. Correction consists of loop fusion and pose-graph optimization over the Essential Graph.

(4) Viewer thread: visualizes the estimated camera poses and map points.
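To make the thread layout concrete, here is a structural sketch of the same pattern the System constructor uses (one std::thread running each component's Run method). The classes, the loop bodies and the single finish flag are simplifications of mine; the real system uses per-component RequestFinish/isFinished handshakes, as seen in Shutdown above.

#include <atomic>
#include <thread>

std::atomic<bool> finishRequested(false);

// Placeholder components; each models one ORB-SLAM2 worker thread.
struct LocalMapping { void Run() { while(!finishRequested) { /* insert keyframes, cull points, local BA */ } } };
struct LoopClosing  { void Run() { while(!finishRequested) { /* BoW loop detection, Sim3, pose-graph opt */ } } };
struct Viewer       { void Run() { while(!finishRequested) { /* draw map and current camera pose */ } } };

int main()
{
    LocalMapping mapper;
    LoopClosing  closer;
    Viewer       viewer;

    // Same launch pattern as the System constructor: one std::thread per component.
    std::thread tMapping(&LocalMapping::Run, &mapper);
    std::thread tClosing(&LoopClosing::Run, &closer);
    std::thread tViewer(&Viewer::Run, &viewer);

    // Tracking lives here, in the main thread: the Track* calls are made from it.
    // for each frame: SLAM.TrackMonocular(im, t); ...

    finishRequested = true;   // Shutdown() plays this role in the real system
    tMapping.join();
    tClosing.join();
    tViewer.join();
    return 0;
}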

Reference:
http://blog.csdn.net/u010128736/article/details/53157605
