Training 3DGS on a Blender-style Dataset

3DGS natively supports training on Blender-style datasets. The expected layout is:

<location>
|---images
|   |---<image 0>
|   |---<image 1>
|   |---...
|---transforms_train.json
|---points3d.ply

Data Preparation

In our case the self-collected dataset comes from a laser-scanning rig, which provides a point cloud in las format, camera poses in json format, and the images themselves.

The first step is to convert the las point cloud into a binary ply file.
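
A minimal conversion sketch, assuming laspy and plyfile are installed (pip install laspy plyfile) and that the scanner's point format carries RGB; the vertex layout matches what fetchPly below reads (x/y/z plus red/green/blue):

import numpy as np
import laspy
from plyfile import PlyData, PlyElement

def las_to_ply(las_path, ply_path):
    las = laspy.read(las_path)
    xyz = np.vstack([las.x, las.y, las.z]).T.astype(np.float32)
    # assumes the las point format carries RGB; las colors are often 16-bit,
    # so scale them down to the 8-bit range the ply vertex layout uses
    rgb = np.vstack([las.red, las.green, las.blue]).T
    if rgb.max() > 255:
        rgb = rgb // 256
    rgb = rgb.astype(np.uint8)

    dtype = [('x', 'f4'), ('y', 'f4'), ('z', 'f4'),
             ('red', 'u1'), ('green', 'u1'), ('blue', 'u1')]
    vertices = np.empty(len(xyz), dtype=dtype)
    for i, name in enumerate(('x', 'y', 'z')):
        vertices[name] = xyz[:, i]
    for i, name in enumerate(('red', 'green', 'blue')):
        vertices[name] = rgb[:, i]
    # PlyData writes binary by default, which is the format 3DGS expects
    PlyData([PlyElement.describe(vertices, 'vertex')]).write(ply_path)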

Next, inspect the transforms_train.json file. The main task is to make targeted changes to scene/dataset_readers.py in the 3DGS codebase.

def readNerfSyntheticInfo(path, white_background, eval, extension=".jpg"):
    print("Reading Training Transforms")
    train_cam_infos = readCamerasFromTransforms(path, "transforms_train.json", white_background, extension)
    #print("Reading Test Transforms")
    #test_cam_infos = readCamerasFromTransforms(path, "transforms_test.json", white_background, extension)
    
    #if not eval:
    #    train_cam_infos.extend(test_cam_infos)
    #    test_cam_infos = []
    test_cam_infos = []
    nerf_normalization = getNerfppNorm(train_cam_infos)

    ply_path = os.path.join(path, "points3d.ply")
    if not os.path.exists(ply_path):
        # Since this data set has no colmap data, we start with random points
        num_pts = 100_000
        print(f"Generating random point cloud ({num_pts})...")
        
        # We create random points inside the bounds of the synthetic Blender scenes
        xyz = np.random.random((num_pts, 3)) * 2.6 - 1.3
        shs = np.random.random((num_pts, 3)) / 255.0
        pcd = BasicPointCloud(points=xyz, colors=SH2RGB(shs), normals=np.zeros((num_pts, 3)))

        storePly(ply_path, xyz, SH2RGB(shs) * 255)
    try:
        pcd = fetchPly(ply_path)
    except:
        pcd = None

    scene_info = SceneInfo(point_cloud=pcd,
                           train_cameras=train_cam_infos,
                           test_cameras=test_cam_infos,
                           nerf_normalization=nerf_normalization,
                           ply_path=ply_path)
    return scene_info

The Test Transforms part is commented out: we will not run eval, so no test set is needed. The default extension was also changed to match our input image format. Since we supply points3d.ply from the laser scan, the random-point fallback branch is skipped and fetchPly loads the real cloud.

def readCamerasFromTransforms(path, transformsfile, white_background, extension=".jpg"):
    cam_infos = []

    with open(os.path.join(path, transformsfile)) as json_file:
        contents = json.load(json_file)
        fovx = contents["camera_angle_x"]

        frames = contents["frames"]
        for idx, frame in enumerate(frames):
            cam_name = os.path.join(path, "images", frame["file_path"])

            # NeRF 'transform_matrix' is a camera-to-world transform
            c2w = np.array(frame["transform_matrix"])
            # change from OpenGL/Blender camera axes (Y up, Z back) to COLMAP (Y down, Z forward)
            c2w[:3, 1:3] *= -1

            # get the world-to-camera transform and set R, T
            w2c = np.linalg.inv(c2w)
            R = np.transpose(w2c[:3,:3])  # R is stored transposed due to 'glm' in CUDA code
            T = w2c[:3, 3]

            image_path = cam_name  # cam_name already contains path; joining again duplicates it when path is relative
            image_name = Path(cam_name).stem
            image = Image.open(image_path)

            im_data = np.array(image.convert("RGBA"))

            bg = np.array([1,1,1]) if white_background else np.array([0, 0, 0])

            norm_data = im_data / 255.0
            arr = norm_data[:,:,:3] * norm_data[:, :, 3:4] + bg * (1 - norm_data[:, :, 3:4])
            image = Image.fromarray(np.array(arr*255.0, dtype=np.byte), "RGB")

            fovy = focal2fov(fov2focal(fovx, image.size[0]), image.size[1])
            FovY = fovy 
            FovX = fovx

            cam_infos.append(CameraInfo(uid=idx, R=R, T=T, FovY=FovY, FovX=FovX, image=image,
                            image_path=image_path, image_name=image_name, width=image.size[0], height=image.size[1]))
            
    return cam_infos
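
For reference, fov2focal and focal2fov are the helpers from the repo's utils/graphics_utils.py (reproduced here from the upstream source; verify against your checkout). They implement the pinhole relation focal = pixels / (2 * tan(fov / 2)), which is how fovy is recovered from fovx plus the image width and height:

import math

def fov2focal(fov, pixels):
    # focal length in pixels for a given field of view along an axis
    return pixels / (2 * math.tan(fov / 2))

def focal2fov(focal, pixels):
    # inverse: field of view for a given focal length in pixels
    return 2 * math.atan(pixels / (2 * focal))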

These lines usually need adapting to the contents of your own json file. The main points to watch are (a sketch of the expected json layout follows this list):

  • fovx = contents["camera_angle_x"] — the key must match where your json stores the horizontal field of view, in radians.

  • cam_name = os.path.join(path, "images", frame["file_path"]) — adjust the join to however your file_path entries are stored; as written, each file_path must be relative to the images directory and already include the file extension (the extension argument is no longer appended here).
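
A hypothetical helper for producing transforms_train.json from your own poses; the field names are exactly the ones readCamerasFromTransforms above reads, while everything else (function name, pose source) is illustrative:

import json
import numpy as np

def write_transforms(out_path, camera_angle_x, poses):
    # poses: list of (filename, 4x4 camera-to-world matrix), with matrices in
    # the OpenGL/Blender convention (Y up, Z back) the reader above expects
    frames = [{"file_path": name,  # relative to images/, extension included
               "transform_matrix": np.asarray(c2w).tolist()}
              for name, c2w in poses]
    with open(out_path, "w") as f:
        json.dump({"camera_angle_x": camera_angle_x, "frames": frames}, f, indent=2)

# e.g. write_transforms("transforms_train.json", 0.8, [("0001.jpg", np.eye(4))])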

def fetchPly(path):
    plydata = PlyData.read(path)
    vertices = plydata['vertex']
    positions = np.vstack([vertices['x'], vertices['y'], vertices['z']]).T
    colors = np.vstack([vertices['red'], vertices['green'], vertices['blue']]).T / 255.0
    # self-collected clouds carry no normals, so use one zero vector per point
    normals = np.zeros_like(positions)
    return BasicPointCloud(points=positions, colors=colors, normals=normals)

This part handles the ply file. A self-collected dataset usually has no point normals, so, following what the colmap portion of the code does, we fill them with zeros, one zero vector per point.

Once these changes are in place, the code generally runs end to end. A laser point cloud combined with high-precision GPS camera poses solves the cases where colmap cannot recover camera poses.
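
With the dataset laid out as above, training should then start from the stock entry point, e.g. python train.py -s <location> as in the 3DGS README.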
