Variables that must be estimated during initialization: the camera-IMU extrinsics R and t, the gravity vector g, the scale s, and the gyroscope and accelerometer biases bg and ba.

The detailed initialization steps of the two algorithms are compared below:

Estimating the gyroscope bias bg

  • Both algorithms solve the same equation, but with different methods. VI ORB-SLAM casts it as the graph optimization below, with a single gyro-bias vertex and one unary edge per consecutive keyframe pair (a sketch of the edge's residual follows the code):

$$b_g^{*}=\mathop{\arg\min}_{b_g}\sum_{i=1}^{N-1}\left\|\operatorname{Log}\left(\left(\Delta R_{i,i+1}\operatorname{Exp}\left(J_{\Delta R}^{g}\,b_g\right)\right)^{T}R_{WB}^{i\,T}\,R_{WB}^{i+1}\right)\right\|^{2},\qquad R_{WB}=R_{WC}\,R_{CB}$$

Vector3d Optimizer::OptimizeInitialGyroBias(const vector<cv::Mat>& vTwc, const vector<IMUPreintegrator>& vImuPreInt)
{
    int N = vTwc.size();
    if(vTwc.size()!=vImuPreInt.size()) cerr<<"vTwc.size()!=vImuPreInt.size()"<<endl;
    Matrix4d Tbc = ConfigParam::GetEigTbc();
    Matrix3d Rcb = Tbc.topLeftCorner(3,3).transpose();

    // Setup optimizer
    g2o::SparseOptimizer optimizer;
    g2o::BlockSolverX::LinearSolverType * linearSolver;
    linearSolver = new g2o::LinearSolverEigen<g2o::BlockSolverX::PoseMatrixType>();
    g2o::BlockSolverX * solver_ptr = new g2o::BlockSolverX(linearSolver);
    g2o::OptimizationAlgorithmGaussNewton* solver = new g2o::OptimizationAlgorithmGaussNewton(solver_ptr);
    optimizer.setAlgorithm(solver);

    // Add vertex of gyro bias to optimizer graph
    g2o::VertexGyrBias * vBiasg = new g2o::VertexGyrBias();
    vBiasg->setEstimate(Eigen::Vector3d::Zero());
    vBiasg->setId(0);
    optimizer.addVertex(vBiasg);

    // Add unary edges for the gyro bias vertex
    for(int i=0; i<N; i++)
    {
        // Ignore the first KF
        if(i==0)
            continue;

        const cv::Mat& Twi = vTwc[i-1];   // pose of previous KF
        Matrix3d Rwci = Converter::toMatrix3d(Twi.rowRange(0,3).colRange(0,3));
        const cv::Mat& Twj = vTwc[i];     // pose of this KF
        Matrix3d Rwcj = Converter::toMatrix3d(Twj.rowRange(0,3).colRange(0,3));

        const IMUPreintegrator& imupreint = vImuPreInt[i];

        g2o::EdgeGyrBias * eBiasg = new g2o::EdgeGyrBias();
        eBiasg->setVertex(0, dynamic_cast<g2o::OptimizableGraph::Vertex*>(optimizer.vertex(0)));
        // measurement is not used in EdgeGyrBias
        eBiasg->dRbij = imupreint.getDeltaR();
        eBiasg->J_dR_bg = imupreint.getJRBiasg();
        eBiasg->Rwbi = Rwci*Rcb;
        eBiasg->Rwbj = Rwcj*Rcb;
        eBiasg->setInformation(imupreint.getCovPVPhi().bottomRightCorner(3,3).inverse());
        optimizer.addEdge(eBiasg);
    }

    // It's actually a linear estimator, so 1 iteration is enough.
    optimizer.initializeOptimization();
    optimizer.optimize(1);

    g2o::VertexGyrBias * vBgEst = static_cast<g2o::VertexGyrBias*>(optimizer.vertex(0));
    return vBgEst->estimate();
}
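
The error each unary g2o::EdgeGyrBias evaluates is worth writing out explicitly. The following is a minimal sketch of that residual (a hypothetical helper, not the library code), assuming the Sophus SO3d API for the SO(3) Exp/Log maps:

#include <Eigen/Dense>
#include <sophus/so3.hpp>

// Residual of one edge: e = Log( (dRbij * Exp(J_dR_bg * bg))^T * Rwbi^T * Rwbj ).
// It compares the bias-corrected preintegrated rotation against the relative
// rotation obtained from the vision poses.
Eigen::Vector3d GyroBiasResidual(const Eigen::Matrix3d& dRbij,    // preintegrated rotation i->j
                                 const Eigen::Matrix3d& J_dR_bg,  // Jacobian of dRbij w.r.t. bg
                                 const Eigen::Matrix3d& Rwbi,     // body rotation of KF i (from vision)
                                 const Eigen::Matrix3d& Rwbj,     // body rotation of KF j (from vision)
                                 const Eigen::Vector3d& bg)       // current gyro bias estimate
{
    // Correct the preintegrated rotation to first order in the bias,
    // then measure the remaining rotational mismatch.
    Eigen::Matrix3d dRcorr = dRbij * Sophus::SO3d::exp(J_dR_bg * bg).matrix();
    return Sophus::SO3d(dRcorr.transpose() * Rwbi.transpose() * Rwbj).log();
}

Since this residual is linear in bg after the first-order expansion, the single Gauss-Newton iteration requested by optimizer.optimize(1) already yields the minimizer.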


  • VINS minimizes the equivalent cost below, but instead of building a graph it assembles the normal equations directly and solves them with an LDLT decomposition (a distilled sketch of the accumulation follows the code). Because the problem is linear, one Gauss-Newton step and the closed-form solve reach the same minimum.

$$\min_{\delta b_w}\sum_{k\in\mathcal{B}}\left\|{q_{b_{k+1}}^{c_0}}^{-1}\otimes q_{b_k}^{c_0}\otimes\gamma_{b_k}^{b_{k+1}}\right\|^{2},\qquad\gamma_{b_k}^{b_{k+1}}\approx\hat\gamma_{b_k}^{b_{k+1}}\otimes\begin{bmatrix}1\\ \frac{1}{2}J_{b_w}^{\gamma}\,\delta b_w\end{bmatrix}$$

where $\mathcal{B}$ is the set of consecutive frame pairs in the window and $J_{b_w}^{\gamma}$ is the Jacobian of the preintegrated rotation with respect to the gyro bias.

void solveGyroscopeBias(map<double, ImageFrame> &all_image_frame, Vector3d* Bgs)
{
    Matrix3d A;
    Vector3d b;
    Vector3d delta_bg;
    A.setZero();
    b.setZero();
    map<double, ImageFrame>::iterator frame_i;
    map<double, ImageFrame>::iterator frame_j;
    for (frame_i = all_image_frame.begin(); next(frame_i) != all_image_frame.end(); frame_i++)
    {
        frame_j = next(frame_i);
        MatrixXd tmp_A(3, 3);
        tmp_A.setZero();
        VectorXd tmp_b(3);
        tmp_b.setZero();
        // Relative rotation between the two frames from the vision poses
        Eigen::Quaterniond q_ij(frame_i->second.R.transpose() * frame_j->second.R);
        tmp_A = frame_j->second.pre_integration->jacobian.template block<3, 3>(O_R, O_BG);
        // Small-angle residual between preintegrated and vision rotations
        tmp_b = 2 * (frame_j->second.pre_integration->delta_q.inverse() * q_ij).vec();
        A += tmp_A.transpose() * tmp_A;
        b += tmp_A.transpose() * tmp_b;
    }
    delta_bg = A.ldlt().solve(b);
    ROS_WARN_STREAM("gyroscope bias initial calibration " << delta_bg.transpose());

    for (int i = 0; i <= WINDOW_SIZE; i++)
        Bgs[i] += delta_bg;

    // Re-propagate the preintegrations with the corrected gyro bias
    for (frame_i = all_image_frame.begin(); next(frame_i) != all_image_frame.end(); frame_i++)
    {
        frame_j = next(frame_i);
        frame_j->second.pre_integration->repropagate(Vector3d::Zero(), Bgs[0]);
    }
}
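
Stripped of the iterator bookkeeping, the loop accumulates one 3×3 row-block of a small normal-equation system per consecutive frame pair. A minimal sketch (hypothetical helper, same math as the loop above):

#include <Eigen/Dense>

// One frame pair's contribution to A * delta_bg = b, where the residual
// 2 * vec(delta_q^-1 (x) q_ij) is the small-angle rotation error between the
// preintegrated rotation and the vision-derived relative rotation.
void AccumulateGyroBlock(const Eigen::Quaterniond& q_ij,      // relative rotation from vision
                         const Eigen::Quaterniond& delta_q,   // preintegrated rotation
                         const Eigen::Matrix3d& J_q_bg,       // Jacobian of delta_q w.r.t. bg
                         Eigen::Matrix3d& A, Eigen::Vector3d& b)
{
    Eigen::Vector3d r = 2.0 * (delta_q.inverse() * q_ij).vec();
    A += J_q_bg.transpose() * J_q_bg;
    b += J_q_bg.transpose() * r;
}
// After summing over all pairs: delta_bg = A.ldlt().solve(b); then repropagate.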


Estimating the gravity g, the scale s, and the extrinsic translation

  • VINS does not solve for the extrinsic translation here; it is initialized to zero and optimized directly in the later tightly-coupled optimization. The two algorithms' formulations differ slightly. VI ORB-SLAM writes, for every triplet of consecutive keyframes (the velocities are eliminated by combining the position relations of frames 1-2 and 2-3), one linear constraint on the scale s and the gravity g_W:

$$\lambda(i)\,s+\beta(i)\,g_W=\gamma(i)$$

with

$$\lambda(i)=\left(p_{WC}^{2}-p_{WC}^{1}\right)\Delta t_{23}+\left(p_{WC}^{2}-p_{WC}^{3}\right)\Delta t_{12},\qquad\beta(i)=\tfrac{1}{2}I_{3\times3}\left(\Delta t_{12}^{2}\Delta t_{23}+\Delta t_{12}\Delta t_{23}^{2}\right)$$

$$\gamma(i)=\left(R_{WC}^{3}-R_{WC}^{2}\right)p_{CB}\,\Delta t_{12}+\left(R_{WC}^{1}-R_{WC}^{2}\right)p_{CB}\,\Delta t_{23}+R_{WC}^{1}R_{CB}\,\Delta p_{12}\,\Delta t_{23}-R_{WC}^{2}R_{CB}\,\Delta p_{23}\,\Delta t_{12}-R_{WC}^{1}R_{CB}\,\Delta v_{12}\,\Delta t_{12}\,\Delta t_{23}$$

Stacking all N−2 triplets gives a 3(N−2)×4 system A·[s; g_W] = B, solved by SVD in the code below; a plausibility-check sketch follows it.

// Solve A*x=B for the x=[s,gw] 4x1 vector
cv::Mat A = cv::Mat::zeros(3*(N-2),4,CV_32F);
cv::Mat B = cv::Mat::zeros(3*(N-2),1,CV_32F);
cv::Mat I3 = cv::Mat::eye(3,3,CV_32F);

// Step 2.
// Approx Scale and Gravity vector in 'world' frame (first KF's camera frame)
for(int i=0; i<N-2; i++)
{
    KeyFrameInit* pKF2 = vKFInit[i+1];
    KeyFrameInit* pKF3 = vKFInit[i+2];
    // Delta time between frames
    double dt12 = pKF2->mIMUPreInt.getDeltaTime();
    double dt23 = pKF3->mIMUPreInt.getDeltaTime();
    // Pre-integrated measurements
    cv::Mat dp12 = Converter::toCvMat(pKF2->mIMUPreInt.getDeltaP());
    cv::Mat dv12 = Converter::toCvMat(pKF2->mIMUPreInt.getDeltaV());
    cv::Mat dp23 = Converter::toCvMat(pKF3->mIMUPreInt.getDeltaP());
    // Pose of camera in world frame
    cv::Mat Twc1 = vTwc[i].clone();
    cv::Mat Twc2 = vTwc[i+1].clone();
    cv::Mat Twc3 = vTwc[i+2].clone();
    // Position of camera center
    cv::Mat pc1 = Twc1.rowRange(0,3).col(3);
    cv::Mat pc2 = Twc2.rowRange(0,3).col(3);
    cv::Mat pc3 = Twc3.rowRange(0,3).col(3);
    // Rotation of camera, Rwc
    cv::Mat Rc1 = Twc1.rowRange(0,3).colRange(0,3);
    cv::Mat Rc2 = Twc2.rowRange(0,3).colRange(0,3);
    cv::Mat Rc3 = Twc3.rowRange(0,3).colRange(0,3);
    // Stack into A/B: lambda*s + beta*g = gamma
    cv::Mat lambda = (pc2-pc1)*dt23 + (pc2-pc3)*dt12;
    cv::Mat beta = 0.5*I3*(dt12*dt12*dt23 + dt12*dt23*dt23);
    cv::Mat gamma = (Rc3-Rc2)*pcb*dt12 + (Rc1-Rc2)*pcb*dt23 + Rc1*Rcb*dp12*dt23
                  - Rc2*Rcb*dp23*dt12 - Rc1*Rcb*dv12*dt12*dt23;
    lambda.copyTo(A.rowRange(3*i+0,3*i+3).col(0));
    beta.copyTo(A.rowRange(3*i+0,3*i+3).colRange(1,4));
    gamma.copyTo(B.rowRange(3*i+0,3*i+3));
    // note: with the paper's sign convention gamma is negated, and so is x
}

// Use SVD to solve A*x=B, x=[s,gw] 4x1 vector
// A = u*w*vt, so x = vt'*winv*u'*B
cv::Mat w,u,vt;   // note: w is a 4x1 vector from SVDecomp()
// A is changed in SVDecomp() with cv::SVD::MODIFY_A for speed
cv::SVDecomp(A,w,u,vt,cv::SVD::MODIFY_A);
// Compute winv, the pseudo-inverse of the singular values
cv::Mat winv=cv::Mat::eye(4,4,CV_32F);
for(int i=0;i<4;i++)
{
    if(fabs(w.at<float>(i))<1e-10)
    {
        w.at<float>(i) += 1e-10;
        cerr<<"w(i) < 1e-10, w="<<endl<<w<<endl;
    }
    winv.at<float>(i,i) = 1./w.at<float>(i);
}
// Then x = vt'*winv*u'*B
cv::Mat x = vt.t()*winv*u.t()*B;
double sstar = x.at<float>(0);      // scale, should be positive
cv::Mat gwstar = x.rowRange(1,4);   // gravity, norm should be about ~9.8
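
The closed-form solution is only meaningful once the trajectory has excited the IMU enough. A minimal plausibility check in the spirit of the VINS code further below (the 1.0 tolerance is the one VINS uses; VI ORB-SLAM instead waits a fixed data-collection period before trusting the result):

// Sketch: accept the Step-2 estimate only if it is physically plausible.
// sstar and gwstar are the outputs of the SVD solve above.
if (sstar <= 0 || fabs(cv::norm(gwstar) - 9.8) > 1.0)
{
    // Not observable yet (e.g. near-uniform motion): keep collecting
    // keyframes and retry the initialization later.
    return false;
}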


  • VINS estimates the scale s, the gravity g, and the velocity v of every frame in the sliding window, again without the extrinsic translation, by LDLT decomposition. (Why solve for the per-frame velocities at all? The preintegrated Δp and Δv measurements couple the velocities of consecutive frames, so including them as unknowns is what keeps the problem linear; the recovered velocities are also reused as initial values for the tightly-coupled optimization. See the unpacking sketch after the code below.) The state vector and measurement model are:

$$\mathcal{X}_I=\left[v_{b_0}^{b_0},\,v_{b_1}^{b_1},\,\cdots,\,v_{b_n}^{b_n},\,g^{c_0},\,s\right]^{T}$$

$$\hat z_{b_k}^{b_{k+1}}=\begin{bmatrix}\hat\alpha_{b_k}^{b_{k+1}}-p_c^b+R_{c_0}^{b_k}R_{b_{k+1}}^{c_0}p_c^b\\ \hat\beta_{b_k}^{b_{k+1}}\end{bmatrix}=H_{b_k}^{b_{k+1}}\mathcal{X}_I+n_{b_k}^{b_{k+1}},\qquad H_{b_k}^{b_{k+1}}=\begin{bmatrix}-I\Delta t_k&0&\frac{1}{2}R_{c_0}^{b_k}\Delta t_k^{2}&R_{c_0}^{b_k}\left(\bar p_{c_{k+1}}^{c_0}-\bar p_{c_k}^{c_0}\right)\\ -I&R_{c_0}^{b_k}R_{b_{k+1}}^{c_0}&R_{c_0}^{b_k}\Delta t_k&0\end{bmatrix}$$

Stacking all consecutive pairs and solving the normal equations by LDLT yields all unknowns at once (the /100.0 and *1000.0 factors in the code are purely numerical conditioning):

bool LinearAlignment(map<double, ImageFrame> &all_image_frame, Vector3d &g, VectorXd &x)
{
    int all_frame_count = all_image_frame.size();
    // State: velocity of every frame (3 each), gravity (3), scale (1)
    int n_state = all_frame_count * 3 + 3 + 1;

    MatrixXd A{n_state, n_state};
    A.setZero();
    VectorXd b{n_state};
    b.setZero();

    map<double, ImageFrame>::iterator frame_i;
    map<double, ImageFrame>::iterator frame_j;
    int i = 0;
    for (frame_i = all_image_frame.begin(); next(frame_i) != all_image_frame.end(); frame_i++, i++)
    {
        frame_j = next(frame_i);

        MatrixXd tmp_A(6, 10);
        tmp_A.setZero();
        VectorXd tmp_b(6);
        tmp_b.setZero();

        double dt = frame_j->second.pre_integration->sum_dt;

        // Position constraint from the preintegrated delta_p
        tmp_A.block<3, 3>(0, 0) = -dt * Matrix3d::Identity();
        tmp_A.block<3, 3>(0, 6) = frame_i->second.R.transpose() * dt * dt / 2 * Matrix3d::Identity();
        tmp_A.block<3, 1>(0, 9) = frame_i->second.R.transpose() * (frame_j->second.T - frame_i->second.T) / 100.0;
        tmp_b.block<3, 1>(0, 0) = frame_j->second.pre_integration->delta_p + frame_i->second.R.transpose() * frame_j->second.R * TIC[0] - TIC[0];
        // Velocity constraint from the preintegrated delta_v
        tmp_A.block<3, 3>(3, 0) = -Matrix3d::Identity();
        tmp_A.block<3, 3>(3, 3) = frame_i->second.R.transpose() * frame_j->second.R;
        tmp_A.block<3, 3>(3, 6) = frame_i->second.R.transpose() * dt * Matrix3d::Identity();
        tmp_b.block<3, 1>(3, 0) = frame_j->second.pre_integration->delta_v;

        Matrix<double, 6, 6> cov_inv = Matrix<double, 6, 6>::Zero();
        cov_inv.setIdentity();

        MatrixXd r_A = tmp_A.transpose() * cov_inv * tmp_A;
        VectorXd r_b = tmp_A.transpose() * cov_inv * tmp_b;

        // Accumulate into the big (sparse) normal equations
        A.block<6, 6>(i * 3, i * 3) += r_A.topLeftCorner<6, 6>();
        b.segment<6>(i * 3) += r_b.head<6>();
        A.bottomRightCorner<4, 4>() += r_A.bottomRightCorner<4, 4>();
        b.tail<4>() += r_b.tail<4>();
        A.block<6, 4>(i * 3, n_state - 4) += r_A.topRightCorner<6, 4>();
        A.block<4, 6>(n_state - 4, i * 3) += r_A.bottomLeftCorner<4, 6>();
    }
    A = A * 1000.0;
    b = b * 1000.0;
    x = A.ldlt().solve(b);
    double s = x(n_state - 1) / 100.0;
    ROS_DEBUG("estimated scale: %f", s);
    g = x.segment<3>(n_state - 4);
    ROS_DEBUG_STREAM(" result g     " << g.norm() << " " << g.transpose());
    if(fabs(g.norm() - G.norm()) > 1.0 || s < 0)
    {
        return false;
    }

    RefineGravity(all_image_frame, g, x);
    s = (x.tail<1>())(0) / 100.0;
    (x.tail<1>())(0) = s;
    ROS_DEBUG_STREAM(" refine     " << g.norm() << " " << g.transpose());
    if(s < 0.0)
        return false;
    else
        return true;
}
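
To make the solution layout explicit, here is a sketch of how x is read back (a hypothetical helper; in VINS this unpacking happens after VisualIMUAlignment returns):

#include <Eigen/Dense>
#include <vector>

// x = [ v_b0 (3) | v_b1 (3) | ... | v_bn (3) | g^{c0} (3) | 100*s (1) ];
// the factor 100 undoes the conditioning applied when building tmp_A above.
void UnpackAlignment(const Eigen::VectorXd& x, int frame_count,
                     std::vector<Eigen::Vector3d>& vel,
                     Eigen::Vector3d& g_c0, double& s)
{
    const int n_state = frame_count * 3 + 3 + 1;
    for (int k = 0; k < frame_count; k++)
        vel.push_back(x.segment<3>(3 * k));   // body-frame velocity of frame k
    g_c0 = x.segment<3>(n_state - 4);         // gravity in the first camera frame
    s = x(n_state - 1) / 100.0;               // metric scale
}

The recovered velocities are what answers the question above: they fall out of the same linear solve and feed the tightly-coupled optimization as initial values.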


  After g is obtained, it is further refined, again by LDLT. Since its magnitude is known, g is reparameterized with only two remaining degrees of freedom in the tangent plane of the current estimate,

$$\hat g^{c_0}=\|g\|\cdot\bar{\hat g}^{c_0}+w_1b_1+w_2b_2$$

where $\bar{\hat g}^{c_0}$ is the current unit direction, $b_1,b_2$ span its tangent plane, and $w_1,w_2$ are solved for over four alternating iterations (a sketch of TangentBasis follows the code):

void RefineGravity(map<double, ImageFrame> &all_image_frame, Vector3d &g, VectorXd &x)
{
    // Fix the gravity magnitude; only its direction (2 DoF) is refined
    Vector3d g0 = g.normalized() * G.norm();
    Vector3d lx, ly;
    int all_frame_count = all_image_frame.size();
    int n_state = all_frame_count * 3 + 2 + 1;

    MatrixXd A{n_state, n_state};
    A.setZero();
    VectorXd b{n_state};
    b.setZero();

    map<double, ImageFrame>::iterator frame_i;
    map<double, ImageFrame>::iterator frame_j;
    for(int k = 0; k < 4; k++)   // 4 alternating re-linearization iterations
    {
        MatrixXd lxly(3, 2);
        lxly = TangentBasis(g0);
        int i = 0;
        for (frame_i = all_image_frame.begin(); next(frame_i) != all_image_frame.end(); frame_i++, i++)
        {
            frame_j = next(frame_i);

            MatrixXd tmp_A(6, 9);
            tmp_A.setZero();
            VectorXd tmp_b(6);
            tmp_b.setZero();

            double dt = frame_j->second.pre_integration->sum_dt;

            // Same constraints as LinearAlignment, but with gravity expressed
            // as g0 plus a 2-DoF correction in the tangent plane (lxly)
            tmp_A.block<3, 3>(0, 0) = -dt * Matrix3d::Identity();
            tmp_A.block<3, 2>(0, 6) = frame_i->second.R.transpose() * dt * dt / 2 * Matrix3d::Identity() * lxly;
            tmp_A.block<3, 1>(0, 8) = frame_i->second.R.transpose() * (frame_j->second.T - frame_i->second.T) / 100.0;
            tmp_b.block<3, 1>(0, 0) = frame_j->second.pre_integration->delta_p + frame_i->second.R.transpose() * frame_j->second.R * TIC[0] - TIC[0] - frame_i->second.R.transpose() * dt * dt / 2 * g0;

            tmp_A.block<3, 3>(3, 0) = -Matrix3d::Identity();
            tmp_A.block<3, 3>(3, 3) = frame_i->second.R.transpose() * frame_j->second.R;
            tmp_A.block<3, 2>(3, 6) = frame_i->second.R.transpose() * dt * Matrix3d::Identity() * lxly;
            tmp_b.block<3, 1>(3, 0) = frame_j->second.pre_integration->delta_v - frame_i->second.R.transpose() * dt * Matrix3d::Identity() * g0;

            Matrix<double, 6, 6> cov_inv = Matrix<double, 6, 6>::Zero();
            cov_inv.setIdentity();

            MatrixXd r_A = tmp_A.transpose() * cov_inv * tmp_A;
            VectorXd r_b = tmp_A.transpose() * cov_inv * tmp_b;

            A.block<6, 6>(i * 3, i * 3) += r_A.topLeftCorner<6, 6>();
            b.segment<6>(i * 3) += r_b.head<6>();
            A.bottomRightCorner<3, 3>() += r_A.bottomRightCorner<3, 3>();
            b.tail<3>() += r_b.tail<3>();
            A.block<6, 3>(i * 3, n_state - 3) += r_A.topRightCorner<6, 3>();
            A.block<3, 6>(n_state - 3, i * 3) += r_A.bottomLeftCorner<3, 6>();
        }
        A = A * 1000.0;
        b = b * 1000.0;
        x = A.ldlt().solve(b);
        VectorXd dg = x.segment<2>(n_state - 3);
        // Apply the tangent-plane correction and re-normalize to |G|
        g0 = (g0 + lxly * dg).normalized() * G.norm();
    }
    g = g0;
}
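
RefineGravity only ever perturbs g inside the two-dimensional tangent plane of its current direction, which is the basis TangentBasis(g0) returns. A sketch of such a construction (equivalent in spirit to the VINS helper; the choice of reference axis is arbitrary):

#include <Eigen/Dense>

// Return two unit vectors spanning the plane orthogonal to g0.
Eigen::MatrixXd TangentBasis(const Eigen::Vector3d& g0)
{
    Eigen::Vector3d a = g0.normalized();
    Eigen::Vector3d tmp(0, 0, 1);
    if (a.isApprox(tmp))                 // avoid a degenerate projection
        tmp << 1, 0, 0;
    Eigen::Vector3d b = (tmp - a * a.dot(tmp)).normalized();   // Gram-Schmidt step
    Eigen::Vector3d c = a.cross(b);
    Eigen::MatrixXd bc(3, 2);
    bc.col(0) = b;
    bc.col(1) = c;
    return bc;
}

Fixing |g| at G.norm() and optimizing only the two tangent weights is what keeps the refined gravity on the 9.8 m/s² sphere.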


Estimating the accelerometer bias

  • VINS does not compute this value either; like the extrinsic translation, it is left to the later optimization (over a short window the accelerometer bias is hard to separate from gravity). VI ORB-SLAM estimates it explicitly, again by SVD. The known gravity magnitude G = 9.8 is used as a constraint: the Step-2 gravity estimate fixes a rotation $R_{WI}$ between the inertial and world frames,

$$\hat g_I=(0,0,1)^{T},\quad\hat g_W=\frac{g_W^{*}}{\|g_W^{*}\|},\quad\hat v=\frac{\hat g_I\times\hat g_W}{\|\hat g_I\times\hat g_W\|},\quad\theta=\operatorname{atan2}\left(\|\hat g_I\times\hat g_W\|,\ \hat g_I\cdot\hat g_W\right),\quad R_{WI}=\operatorname{Exp}(\hat v\,\theta)$$

and gravity is rewritten as $g_W=R_{WI}\operatorname{Exp}(\delta\theta)\,\hat g_I\,G$, where only the x/y components of $\delta\theta$ are observable. Each keyframe triplet then contributes one linear constraint

$$\lambda(i)\,s+\phi(i)\,\delta\theta_{xy}+\zeta(i)\,b_a=\psi(i)$$

which is stacked into C·x = D with x = [s, δθ_xy, b_a] ∈ R⁶ and solved by SVD; the coefficient blocks are assembled in the code below:

// Step 3.
// Use gravity magnitude 9.8 as constraint
// gI = [0;0;1], the normalized gravity vector in an inertial frame (NED type, no orientation)
cv::Mat gI = cv::Mat::zeros(3,1,CV_32F);
gI.at<float>(2) = 1;
// Normalized approx. gravity vector in world frame
cv::Mat gwn = gwstar/cv::norm(gwstar);
// vhat = (gI x gwn) / |gI x gwn|: axis of the rotation aligning gI with gwn
cv::Mat gIxgwn = gI.cross(gwn);
double normgIxgwn = cv::norm(gIxgwn);
cv::Mat vhat = gIxgwn/normgIxgwn;
double theta = std::atan2(normgIxgwn,gI.dot(gwn));

Eigen::Vector3d vhateig = Converter::toVector3d(vhat);
Eigen::Matrix3d RWIeig = Sophus::SO3::exp(vhateig*theta).matrix();
cv::Mat Rwi = Converter::toCvMat(RWIeig);
cv::Mat GI = gI*ConfigParam::GetG();   // 9.8

// Solve C*x=D for x=[s,dthetaxy,ba] (1+2+3)x1 vector
cv::Mat C = cv::Mat::zeros(3*(N-2),6,CV_32F);
cv::Mat D = cv::Mat::zeros(3*(N-2),1,CV_32F);
for(int i=0; i<N-2; i++)
{
    KeyFrameInit* pKF2 = vKFInit[i+1];
    KeyFrameInit* pKF3 = vKFInit[i+2];
    // Delta time between frames
    double dt12 = pKF2->mIMUPreInt.getDeltaTime();
    double dt23 = pKF3->mIMUPreInt.getDeltaTime();
    // Pre-integrated measurements and their Jacobians w.r.t. the accel bias
    cv::Mat dp12 = Converter::toCvMat(pKF2->mIMUPreInt.getDeltaP());
    cv::Mat dv12 = Converter::toCvMat(pKF2->mIMUPreInt.getDeltaV());
    cv::Mat dp23 = Converter::toCvMat(pKF3->mIMUPreInt.getDeltaP());
    cv::Mat Jpba12 = Converter::toCvMat(pKF2->mIMUPreInt.getJPBiasa());
    cv::Mat Jvba12 = Converter::toCvMat(pKF2->mIMUPreInt.getJVBiasa());
    cv::Mat Jpba23 = Converter::toCvMat(pKF3->mIMUPreInt.getJPBiasa());
    // Pose of camera in world frame
    cv::Mat Twc1 = vTwc[i].clone();
    cv::Mat Twc2 = vTwc[i+1].clone();
    cv::Mat Twc3 = vTwc[i+2].clone();
    // Position of camera center
    cv::Mat pc1 = Twc1.rowRange(0,3).col(3);
    cv::Mat pc2 = Twc2.rowRange(0,3).col(3);
    cv::Mat pc3 = Twc3.rowRange(0,3).col(3);
    // Rotation of camera, Rwc
    cv::Mat Rc1 = Twc1.rowRange(0,3).colRange(0,3);
    cv::Mat Rc2 = Twc2.rowRange(0,3).colRange(0,3);
    cv::Mat Rc3 = Twc3.rowRange(0,3).colRange(0,3);
    // Stack into C/D: lambda*s + phi*dthetaxy + zeta*ba = psi
    cv::Mat lambda = (pc2-pc1)*dt23 + (pc2-pc3)*dt12;
    cv::Mat phi = -0.5*(dt12*dt12*dt23 + dt12*dt23*dt23)*Rwi*SkewSymmetricMatrix(GI);   // note: the '-' differs from the paper
    cv::Mat zeta = Rc2*Rcb*Jpba23*dt12 + Rc1*Rcb*Jvba12*dt12*dt23 - Rc1*Rcb*Jpba12*dt23;
    cv::Mat psi = (Rc1-Rc2)*pcb*dt23 + Rc1*Rcb*dp12*dt23 - (Rc2-Rc3)*pcb*dt12
                - Rc2*Rcb*dp23*dt12 - Rc1*Rcb*dv12*dt23*dt12
                - 0.5*Rwi*GI*(dt12*dt12*dt23 + dt12*dt23*dt23);   // note: sign differs from the paper
    lambda.copyTo(C.rowRange(3*i+0,3*i+3).col(0));
    // Only the first 2 columns of phi: the z-component of dtheta is unobservable
    phi.colRange(0,2).copyTo(C.rowRange(3*i+0,3*i+3).colRange(1,3));
    zeta.copyTo(C.rowRange(3*i+0,3*i+3).colRange(3,6));
    psi.copyTo(D.rowRange(3*i+0,3*i+3));
}

// Use SVD to solve C*x=D, x=[s,dthetaxy,ba] 6x1 vector
// C = u*w*vt, so x = vt'*winv*u'*D
cv::Mat w2,u2,vt2;   // note: w2 is a 6x1 vector from SVDecomp()
// C is changed in SVDecomp() with cv::SVD::MODIFY_A for speed
cv::SVDecomp(C,w2,u2,vt2,cv::SVD::MODIFY_A);
cv::Mat w2inv=cv::Mat::eye(6,6,CV_32F);
for(int i=0;i<6;i++)
{
    if(fabs(w2.at<float>(i))<1e-10)
    {
        w2.at<float>(i) += 1e-10;
        cerr<<"w2(i) < 1e-10, w="<<endl<<w2<<endl;
    }
    w2inv.at<float>(i,i) = 1./w2.at<float>(i);
}
// Then y = vt'*w2inv*u'*D
cv::Mat y = vt2.t()*w2inv*u2.t()*D;
double s_ = y.at<float>(0);           // refined scale
cv::Mat dthetaxy = y.rowRange(1,3);   // gravity direction correction (x,y)
cv::Mat dbiasa_ = y.rowRange(3,6);    // accelerometer bias
Vector3d dbiasa_eig = Converter::toVector3d(dbiasa_);
// dtheta = [dx;dy;0]
cv::Mat dtheta = cv::Mat::zeros(3,1,CV_32F);
dthetaxy.copyTo(dtheta.rowRange(0,2));
Eigen::Vector3d dthetaeig = Converter::toVector3d(dtheta);
// Rwi_ = Rwi*exp(dtheta): refined inertial-to-world rotation
Eigen::Matrix3d Rwieig_ = RWIeig*Sophus::SO3::exp(dthetaeig).matrix();
cv::Mat Rwi_ = Converter::toCvMat(Rwieig_);
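
What follows this block in the source is the bookkeeping that turns the refined quantities into the final initialization result; roughly (a sketch reusing the variable names above):

// Refined world-frame gravity from the corrected R_WI; its norm is 9.8 by
// construction. s_ replaces the Step-2 scale and dbiasa_ becomes the
// accelerometer-bias estimate, after which the IMU preintegrations are
// recomputed with the new biases.
cv::Mat gw_refined = Rwi_ * GI;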


Reposted from: https://www.cnblogs.com/mybrave/p/9564899.html
