The official example images.txt looks like this:

# Image list with two lines of data per image:
#   IMAGE_ID, QW, QX, QY, QZ, TX, TY, TZ, CAMERA_ID, NAME
#   POINTS2D[] as (X, Y, POINT3D_ID)
#   (2D keypoint coordinates together with the ID of the corresponding 3D point; the ID is -1 if the keypoint has no corresponding 3D point)
# Number of images: 2, mean observations per image: 2
1 0.851773 0.0165051 0.503764 -0.142941 -0.737434 1.02973 3.74354 1 P1180141.JPG
2362.39 248.498 58396 1784.7 268.254 59027 1784.7 268.254 -1
2 0.851773 0.0165051 0.503764 -0.142941 -0.737434 1.02973 3.74354 1 P1180142.JPG
1190.83 663.957 23056 1258.77 640.354 59070
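
For reference, the pose of each image sits entirely on the first of its two lines. The following is a minimal sketch of a parser that keeps only those pose lines; read_image_poses is a hypothetical helper written for this post (it assumes the official two-lines-per-image layout shown above), not part of COLMAP.

# Minimal sketch (hypothetical helper): collect only the pose line of each entry
# from an images.txt that follows the official two-lines-per-image layout.
def read_image_poses(path):
    poses = {}
    with open(path, "r") as fid:
        while True:
            line = fid.readline()
            if not line:
                break
            line = line.strip()
            if len(line) == 0 or line[0] == "#":
                continue
            elems = line.split()
            image_id = int(elems[0])
            qvec = tuple(map(float, elems[1:5]))   # QW, QX, QY, QZ
            tvec = tuple(map(float, elems[5:8]))   # TX, TY, TZ
            camera_id = int(elems[8])
            name = elems[9]
            poses[image_id] = (qvec, tvec, camera_id, name)
            fid.readline()                         # skip the POINTS2D[] line
    return poses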

But the images.txt I generated myself did not look like this.

I was stuck for quite a while, unable to find the second image's entry, before realizing that there was simply too much keypoint information: each image's second line is so long that the generated txt becomes unwieldy and the remaining entries are pushed out of view. What we actually need is only each image's extrinsics. So, based on the official COLMAP script, I wrote the code below, which produces exactly the data we want.

The concrete workflow is as follows:

1. Generate images.bin

Feature extraction → feature matching → sparse reconstruction → export model → images.bin (a command-line sketch of this pipeline is given right below).
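
For completeness, here is a rough sketch of driving this step through the COLMAP command line instead of the GUI. The paths are placeholders and the exact options may vary slightly between COLMAP versions; either way, the result is a sparse model containing cameras.bin, images.bin, and points3D.bin.

# Sketch: run the step-1 pipeline through the COLMAP CLI (placeholder paths).
import os
import subprocess

db = "project/database.db"
image_dir = "project/images"
sparse_dir = "project/sparse"
os.makedirs(sparse_dir, exist_ok=True)

subprocess.run(["colmap", "feature_extractor",
                "--database_path", db, "--image_path", image_dir], check=True)
subprocess.run(["colmap", "exhaustive_matcher",
                "--database_path", db], check=True)
subprocess.run(["colmap", "mapper",
                "--database_path", db, "--image_path", image_dir,
                "--output_path", sparse_dir], check=True)
# The reconstructed sparse model is written to a subfolder such as sparse_dir/0.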

2. Generate images.txt

With the code below you only need to change the paths; it converts images.bin to images.txt and strips out the keypoint information:

test.py

from aa import *

if __name__ == '__main__':
    # Read the binary model exported by COLMAP.
    images = read_images_binary('C:/Users/.../images.bin')
    if len(images) == 0:
        mean_observations = 0
    else:
        mean_observations = sum((len(img.point3D_ids) for _, img in images.items())) / len(images)
    # Keep the official header text, even though only the pose line is written per image.
    HEADER = "# Image list with two lines of data per image:\n" + \
             "#   IMAGE_ID, QW, QX, QY, QZ, TX, TY, TZ, CAMERA_ID, NAME\n" + \
             "#   POINTS2D[] as (X, Y, POINT3D_ID)\n" + \
             "# Number of images: {}, mean observations per image: {}\n".format(len(images), mean_observations)
    with open('C:/Users/.../images.txt', "w") as fid:
        fid.write(HEADER)
        for _, img in images.items():
            # Write only the pose line of each image; the POINTS2D[] line is dropped.
            image_header = [img.id, *img.qvec, *img.tvec, img.camera_id, img.name]
            first_line = " ".join(map(str, image_header))
            fid.write(first_line + "\n")
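
Once the extrinsics are exported, keep in mind that COLMAP stores the world-to-camera transform. The sketch below (reusing read_images_binary and qvec2rotmat from aa.py, with the same placeholder path as above) shows how to turn the quaternion and translation into a rotation matrix and the camera center in world coordinates.

# Sketch: interpret the exported extrinsics.
import numpy as np
from aa import read_images_binary, qvec2rotmat

images = read_images_binary('C:/Users/.../images.bin')
for image_id, img in images.items():
    R = qvec2rotmat(img.qvec)   # world-to-camera rotation from (QW, QX, QY, QZ)
    t = img.tvec                # world-to-camera translation (TX, TY, TZ)
    C = -R.T @ t                # camera center in world coordinates
    print(image_id, img.name, C)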

aa.py (the official COLMAP script)

It contains many more functions for reading and writing COLMAP models (cameras, images, 3D points, in both text and binary form), which can be reused for other needs; see the small example after the script.

import os
import collections
import numpy as np
import struct
import argparse

CameraModel = collections.namedtuple("CameraModel", ["model_id", "model_name", "num_params"])
Camera = collections.namedtuple("Camera", ["id", "model", "width", "height", "params"])
BaseImage = collections.namedtuple("Image", ["id", "qvec", "tvec", "camera_id", "name", "xys", "point3D_ids"])
Point3D = collections.namedtuple("Point3D", ["id", "xyz", "rgb", "error", "image_ids", "point2D_idxs"])


class Image(BaseImage):
    def qvec2rotmat(self):
        return qvec2rotmat(self.qvec)


CAMERA_MODELS = {
    CameraModel(model_id=0, model_name="SIMPLE_PINHOLE", num_params=3),
    CameraModel(model_id=1, model_name="PINHOLE", num_params=4),
    CameraModel(model_id=2, model_name="SIMPLE_RADIAL", num_params=4),
    CameraModel(model_id=3, model_name="RADIAL", num_params=5),
    CameraModel(model_id=4, model_name="OPENCV", num_params=8),
    CameraModel(model_id=5, model_name="OPENCV_FISHEYE", num_params=8),
    CameraModel(model_id=6, model_name="FULL_OPENCV", num_params=12),
    CameraModel(model_id=7, model_name="FOV", num_params=5),
    CameraModel(model_id=8, model_name="SIMPLE_RADIAL_FISHEYE", num_params=4),
    CameraModel(model_id=9, model_name="RADIAL_FISHEYE", num_params=5),
    CameraModel(model_id=10, model_name="THIN_PRISM_FISHEYE", num_params=12)
}
CAMERA_MODEL_IDS = dict([(camera_model.model_id, camera_model) for camera_model in CAMERA_MODELS])
CAMERA_MODEL_NAMES = dict([(camera_model.model_name, camera_model) for camera_model in CAMERA_MODELS])


def read_next_bytes(fid, num_bytes, format_char_sequence, endian_character="<"):
    """Read and unpack the next bytes from a binary file.
    :param fid:
    :param num_bytes: Sum of combination of {2, 4, 8}, e.g. 2, 6, 16, 30, etc.
    :param format_char_sequence: List of {c, e, f, d, h, H, i, I, l, L, q, Q}.
    :param endian_character: Any of {@, =, <, >, !}
    :return: Tuple of read and unpacked values.
    """
    data = fid.read(num_bytes)
    return struct.unpack(endian_character + format_char_sequence, data)


def write_next_bytes(fid, data, format_char_sequence, endian_character="<"):
    """pack and write to a binary file.
    :param fid:
    :param data: data to send, if multiple elements are sent at the same time,
    they should be encapsuled either in a list or a tuple
    :param format_char_sequence: List of {c, e, f, d, h, H, i, I, l, L, q, Q}.
    should be the same length as the data list or tuple
    :param endian_character: Any of {@, =, <, >, !}
    """
    if isinstance(data, (list, tuple)):
        bytes = struct.pack(endian_character + format_char_sequence, *data)
    else:
        bytes = struct.pack(endian_character + format_char_sequence, data)
    fid.write(bytes)


def read_cameras_text(path):
    """
    see: src/base/reconstruction.cc
        void Reconstruction::WriteCamerasText(const std::string& path)
        void Reconstruction::ReadCamerasText(const std::string& path)
    """
    cameras = {}
    with open(path, "r") as fid:
        while True:
            line = fid.readline()
            if not line:
                break
            line = line.strip()
            if len(line) > 0 and line[0] != "#":
                elems = line.split()
                camera_id = int(elems[0])
                model = elems[1]
                width = int(elems[2])
                height = int(elems[3])
                params = np.array(tuple(map(float, elems[4:])))
                cameras[camera_id] = Camera(id=camera_id, model=model,
                                            width=width, height=height,
                                            params=params)
    return cameras


def read_cameras_binary(path_to_model_file):
    """
    see: src/base/reconstruction.cc
        void Reconstruction::WriteCamerasBinary(const std::string& path)
        void Reconstruction::ReadCamerasBinary(const std::string& path)
    """
    cameras = {}
    with open(path_to_model_file, "rb") as fid:
        num_cameras = read_next_bytes(fid, 8, "Q")[0]
        for _ in range(num_cameras):
            camera_properties = read_next_bytes(
                fid, num_bytes=24, format_char_sequence="iiQQ")
            camera_id = camera_properties[0]
            model_id = camera_properties[1]
            model_name = CAMERA_MODEL_IDS[camera_properties[1]].model_name
            width = camera_properties[2]
            height = camera_properties[3]
            num_params = CAMERA_MODEL_IDS[model_id].num_params
            params = read_next_bytes(fid, num_bytes=8*num_params,
                                     format_char_sequence="d"*num_params)
            cameras[camera_id] = Camera(id=camera_id,
                                        model=model_name,
                                        width=width,
                                        height=height,
                                        params=np.array(params))
        assert len(cameras) == num_cameras
    return cameras


def write_cameras_text(cameras, path):
    """
    see: src/base/reconstruction.cc
        void Reconstruction::WriteCamerasText(const std::string& path)
        void Reconstruction::ReadCamerasText(const std::string& path)
    """
    HEADER = "# Camera list with one line of data per camera:\n" + \
             "#   CAMERA_ID, MODEL, WIDTH, HEIGHT, PARAMS[]\n" + \
             "# Number of cameras: {}\n".format(len(cameras))
    with open(path, "w") as fid:
        fid.write(HEADER)
        for _, cam in cameras.items():
            to_write = [cam.id, cam.model, cam.width, cam.height, *cam.params]
            line = " ".join([str(elem) for elem in to_write])
            fid.write(line + "\n")


def write_cameras_binary(cameras, path_to_model_file):
    """
    see: src/base/reconstruction.cc
        void Reconstruction::WriteCamerasBinary(const std::string& path)
        void Reconstruction::ReadCamerasBinary(const std::string& path)
    """
    with open(path_to_model_file, "wb") as fid:
        write_next_bytes(fid, len(cameras), "Q")
        for _, cam in cameras.items():
            model_id = CAMERA_MODEL_NAMES[cam.model].model_id
            camera_properties = [cam.id,
                                 model_id,
                                 cam.width,
                                 cam.height]
            write_next_bytes(fid, camera_properties, "iiQQ")
            for p in cam.params:
                write_next_bytes(fid, float(p), "d")
    return cameras


def read_images_text(path):
    """
    see: src/base/reconstruction.cc
        void Reconstruction::ReadImagesText(const std::string& path)
        void Reconstruction::WriteImagesText(const std::string& path)
    """
    images = {}
    with open(path, "r") as fid:
        while True:
            line = fid.readline()
            if not line:
                break
            line = line.strip()
            if len(line) > 0 and line[0] != "#":
                elems = line.split()
                image_id = int(elems[0])
                qvec = np.array(tuple(map(float, elems[1:5])))
                tvec = np.array(tuple(map(float, elems[5:8])))
                camera_id = int(elems[8])
                image_name = elems[9]
                elems = fid.readline().split()
                xys = np.column_stack([tuple(map(float, elems[0::3])),
                                       tuple(map(float, elems[1::3]))])
                point3D_ids = np.array(tuple(map(int, elems[2::3])))
                images[image_id] = Image(
                    id=image_id, qvec=qvec, tvec=tvec,
                    camera_id=camera_id, name=image_name,
                    xys=xys, point3D_ids=point3D_ids)
    return images


def read_images_binary(path_to_model_file):
    """
    see: src/base/reconstruction.cc
        void Reconstruction::ReadImagesBinary(const std::string& path)
        void Reconstruction::WriteImagesBinary(const std::string& path)
    """
    images = {}
    with open(path_to_model_file, "rb") as fid:
        num_reg_images = read_next_bytes(fid, 8, "Q")[0]
        for _ in range(num_reg_images):
            binary_image_properties = read_next_bytes(
                fid, num_bytes=64, format_char_sequence="idddddddi")
            image_id = binary_image_properties[0]
            qvec = np.array(binary_image_properties[1:5])
            tvec = np.array(binary_image_properties[5:8])
            camera_id = binary_image_properties[8]
            image_name = ""
            current_char = read_next_bytes(fid, 1, "c")[0]
            while current_char != b"\x00":   # look for the ASCII 0 entry
                image_name += current_char.decode("utf-8")
                current_char = read_next_bytes(fid, 1, "c")[0]
            num_points2D = read_next_bytes(fid, num_bytes=8,
                                           format_char_sequence="Q")[0]
            x_y_id_s = read_next_bytes(fid, num_bytes=24*num_points2D,
                                       format_char_sequence="ddq"*num_points2D)
            xys = np.column_stack([tuple(map(float, x_y_id_s[0::3])),
                                   tuple(map(float, x_y_id_s[1::3]))])
            point3D_ids = np.array(tuple(map(int, x_y_id_s[2::3])))
            images[image_id] = Image(
                id=image_id, qvec=qvec, tvec=tvec,
                camera_id=camera_id, name=image_name,
                xys=xys, point3D_ids=point3D_ids)
    return images


def write_images_text(images, path):
    """
    see: src/base/reconstruction.cc
        void Reconstruction::ReadImagesText(const std::string& path)
        void Reconstruction::WriteImagesText(const std::string& path)
    """
    if len(images) == 0:
        mean_observations = 0
    else:
        mean_observations = sum((len(img.point3D_ids) for _, img in images.items()))/len(images)
    HEADER = "# Image list with two lines of data per image:\n" + \
             "#   IMAGE_ID, QW, QX, QY, QZ, TX, TY, TZ, CAMERA_ID, NAME\n" + \
             "#   POINTS2D[] as (X, Y, POINT3D_ID)\n" + \
             "# Number of images: {}, mean observations per image: {}\n".format(len(images), mean_observations)

    with open(path, "w") as fid:
        fid.write(HEADER)
        for _, img in images.items():
            image_header = [img.id, *img.qvec, *img.tvec, img.camera_id, img.name]
            first_line = " ".join(map(str, image_header))
            fid.write(first_line + "\n")

            points_strings = []
            for xy, point3D_id in zip(img.xys, img.point3D_ids):
                points_strings.append(" ".join(map(str, [*xy, point3D_id])))
            fid.write(" ".join(points_strings) + "\n")


def write_images_binary(images, path_to_model_file):
    """
    see: src/base/reconstruction.cc
        void Reconstruction::ReadImagesBinary(const std::string& path)
        void Reconstruction::WriteImagesBinary(const std::string& path)
    """
    with open(path_to_model_file, "wb") as fid:
        write_next_bytes(fid, len(images), "Q")
        for _, img in images.items():
            write_next_bytes(fid, img.id, "i")
            write_next_bytes(fid, img.qvec.tolist(), "dddd")
            write_next_bytes(fid, img.tvec.tolist(), "ddd")
            write_next_bytes(fid, img.camera_id, "i")
            for char in img.name:
                write_next_bytes(fid, char.encode("utf-8"), "c")
            write_next_bytes(fid, b"\x00", "c")
            write_next_bytes(fid, len(img.point3D_ids), "Q")
            for xy, p3d_id in zip(img.xys, img.point3D_ids):
                write_next_bytes(fid, [*xy, p3d_id], "ddq")


def read_points3D_text(path):
    """
    see: src/base/reconstruction.cc
        void Reconstruction::ReadPoints3DText(const std::string& path)
        void Reconstruction::WritePoints3DText(const std::string& path)
    """
    points3D = {}
    with open(path, "r") as fid:
        while True:
            line = fid.readline()
            if not line:
                break
            line = line.strip()
            if len(line) > 0 and line[0] != "#":
                elems = line.split()
                point3D_id = int(elems[0])
                xyz = np.array(tuple(map(float, elems[1:4])))
                rgb = np.array(tuple(map(int, elems[4:7])))
                error = float(elems[7])
                image_ids = np.array(tuple(map(int, elems[8::2])))
                point2D_idxs = np.array(tuple(map(int, elems[9::2])))
                points3D[point3D_id] = Point3D(id=point3D_id, xyz=xyz, rgb=rgb,
                                               error=error, image_ids=image_ids,
                                               point2D_idxs=point2D_idxs)
    return points3D


def read_points3D_binary(path_to_model_file):
    """
    see: src/base/reconstruction.cc
        void Reconstruction::ReadPoints3DBinary(const std::string& path)
        void Reconstruction::WritePoints3DBinary(const std::string& path)
    """
    points3D = {}
    with open(path_to_model_file, "rb") as fid:
        num_points = read_next_bytes(fid, 8, "Q")[0]
        for _ in range(num_points):
            binary_point_line_properties = read_next_bytes(
                fid, num_bytes=43, format_char_sequence="QdddBBBd")
            point3D_id = binary_point_line_properties[0]
            xyz = np.array(binary_point_line_properties[1:4])
            rgb = np.array(binary_point_line_properties[4:7])
            error = np.array(binary_point_line_properties[7])
            track_length = read_next_bytes(
                fid, num_bytes=8, format_char_sequence="Q")[0]
            track_elems = read_next_bytes(
                fid, num_bytes=8*track_length,
                format_char_sequence="ii"*track_length)
            image_ids = np.array(tuple(map(int, track_elems[0::2])))
            point2D_idxs = np.array(tuple(map(int, track_elems[1::2])))
            points3D[point3D_id] = Point3D(
                id=point3D_id, xyz=xyz, rgb=rgb,
                error=error, image_ids=image_ids,
                point2D_idxs=point2D_idxs)
    return points3D


def write_points3D_text(points3D, path):
    """
    see: src/base/reconstruction.cc
        void Reconstruction::ReadPoints3DText(const std::string& path)
        void Reconstruction::WritePoints3DText(const std::string& path)
    """
    if len(points3D) == 0:
        mean_track_length = 0
    else:
        mean_track_length = sum((len(pt.image_ids) for _, pt in points3D.items()))/len(points3D)
    HEADER = "# 3D point list with one line of data per point:\n" + \
             "#   POINT3D_ID, X, Y, Z, R, G, B, ERROR, TRACK[] as (IMAGE_ID, POINT2D_IDX)\n" + \
             "# Number of points: {}, mean track length: {}\n".format(len(points3D), mean_track_length)

    with open(path, "w") as fid:
        fid.write(HEADER)
        for _, pt in points3D.items():
            point_header = [pt.id, *pt.xyz, *pt.rgb, pt.error]
            fid.write(" ".join(map(str, point_header)) + " ")
            track_strings = []
            for image_id, point2D in zip(pt.image_ids, pt.point2D_idxs):
                track_strings.append(" ".join(map(str, [image_id, point2D])))
            fid.write(" ".join(track_strings) + "\n")


def write_points3D_binary(points3D, path_to_model_file):
    """
    see: src/base/reconstruction.cc
        void Reconstruction::ReadPoints3DBinary(const std::string& path)
        void Reconstruction::WritePoints3DBinary(const std::string& path)
    """
    with open(path_to_model_file, "wb") as fid:
        write_next_bytes(fid, len(points3D), "Q")
        for _, pt in points3D.items():
            write_next_bytes(fid, pt.id, "Q")
            write_next_bytes(fid, pt.xyz.tolist(), "ddd")
            write_next_bytes(fid, pt.rgb.tolist(), "BBB")
            write_next_bytes(fid, pt.error, "d")
            track_length = pt.image_ids.shape[0]
            write_next_bytes(fid, track_length, "Q")
            for image_id, point2D_id in zip(pt.image_ids, pt.point2D_idxs):
                write_next_bytes(fid, [image_id, point2D_id], "ii")


def detect_model_format(path, ext):
    if os.path.isfile(os.path.join(path, "cameras"  + ext)) and \
       os.path.isfile(os.path.join(path, "images"   + ext)) and \
       os.path.isfile(os.path.join(path, "points3D" + ext)):
        print("Detected model format: '" + ext + "'")
        return True
    return False


def read_model(path, ext=""):
    # try to detect the extension automatically
    if ext == "":
        if detect_model_format(path, ".bin"):
            ext = ".bin"
        elif detect_model_format(path, ".txt"):
            ext = ".txt"
        else:
            print("Provide model format: '.bin' or '.txt'")
            return

    if ext == ".txt":
        cameras = read_cameras_text(os.path.join(path, "cameras" + ext))
        images = read_images_text(os.path.join(path, "images" + ext))
        points3D = read_points3D_text(os.path.join(path, "points3D") + ext)
    else:
        cameras = read_cameras_binary(os.path.join(path, "cameras" + ext))
        images = read_images_binary(os.path.join(path, "images" + ext))
        points3D = read_points3D_binary(os.path.join(path, "points3D") + ext)
    return cameras, images, points3D


def write_model(cameras, images, points3D, path, ext=".bin"):
    if ext == ".txt":
        write_cameras_text(cameras, os.path.join(path, "cameras" + ext))
        write_images_text(images, os.path.join(path, "images" + ext))
        write_points3D_text(points3D, os.path.join(path, "points3D") + ext)
    else:
        write_cameras_binary(cameras, os.path.join(path, "cameras" + ext))
        write_images_binary(images, os.path.join(path, "images" + ext))
        write_points3D_binary(points3D, os.path.join(path, "points3D") + ext)
    return cameras, images, points3D


def qvec2rotmat(qvec):
    return np.array([
        [1 - 2 * qvec[2]**2 - 2 * qvec[3]**2,
         2 * qvec[1] * qvec[2] - 2 * qvec[0] * qvec[3],
         2 * qvec[3] * qvec[1] + 2 * qvec[0] * qvec[2]],
        [2 * qvec[1] * qvec[2] + 2 * qvec[0] * qvec[3],
         1 - 2 * qvec[1]**2 - 2 * qvec[3]**2,
         2 * qvec[2] * qvec[3] - 2 * qvec[0] * qvec[1]],
        [2 * qvec[3] * qvec[1] - 2 * qvec[0] * qvec[2],
         2 * qvec[2] * qvec[3] + 2 * qvec[0] * qvec[1],
         1 - 2 * qvec[1]**2 - 2 * qvec[2]**2]])


def rotmat2qvec(R):
    Rxx, Ryx, Rzx, Rxy, Ryy, Rzy, Rxz, Ryz, Rzz = R.flat
    K = np.array([
        [Rxx - Ryy - Rzz, 0, 0, 0],
        [Ryx + Rxy, Ryy - Rxx - Rzz, 0, 0],
        [Rzx + Rxz, Rzy + Ryz, Rzz - Rxx - Ryy, 0],
        [Ryz - Rzy, Rzx - Rxz, Rxy - Ryx, Rxx + Ryy + Rzz]]) / 3.0
    eigvals, eigvecs = np.linalg.eigh(K)
    qvec = eigvecs[[3, 0, 1, 2], np.argmax(eigvals)]
    if qvec[0] < 0:
        qvec *= -1
    return qvec


def main():
    parser = argparse.ArgumentParser(description="Read and write COLMAP binary and text models")
    parser.add_argument("--input_model", help="path to input model folder")
    parser.add_argument("--input_format", choices=[".bin", ".txt"],
                        help="input model format", default="")
    parser.add_argument("--output_model",
                        help="path to output model folder")
    parser.add_argument("--output_format", choices=[".bin", ".txt"],
                        help="outut model format", default=".txt")
    args = parser.parse_args()

    cameras, images, points3D = read_model(path=args.input_model, ext=args.input_format)

    print("num_cameras:", len(cameras))
    print("num_images:", len(images))
    print("num_points3D:", len(points3D))

    if args.output_model is not None:
        write_model(cameras, images, points3D, path=args.output_model, ext=args.output_format)


if __name__ == "__main__":
    main()
