3DGS natively supports training on Blender-style datasets, whose expected data layout is:
<location>
|---images
| |---<image 0>
| |---<image 1>
| |---...
|---transforms_train.json
|---points3d.ply
Data Preparation
A self-collected dataset typically comes from a laser scanning rig, which provides a point cloud in las format, camera poses in json format, and the images.
The first step is to process the las point cloud and convert it into a binary ply file.
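The 3DGS repo does not ship such a converter; below is a minimal sketch using laspy and plyfile (both third-party packages, and the file names are placeholders). It assumes the scan stores 16-bit RGB and writes the x/y/z, nx/ny/nz, red/green/blue fields that storePly and fetchPly in dataset_readers.py expect.

import numpy as np
import laspy
from plyfile import PlyData, PlyElement

las = laspy.read("scan.las")                        # placeholder input file
xyz = np.vstack([las.x, las.y, las.z]).T            # (N, 3) point coordinates
rgb = np.vstack([las.red, las.green, las.blue]).T   # most las files store 16-bit color
rgb = (rgb / 256).astype(np.uint8)                  # rescale to 8-bit for the ply

# field layout matching storePly/fetchPly in scene/dataset_readers.py
dtype = [('x', 'f4'), ('y', 'f4'), ('z', 'f4'),
         ('nx', 'f4'), ('ny', 'f4'), ('nz', 'f4'),
         ('red', 'u1'), ('green', 'u1'), ('blue', 'u1')]
elements = np.empty(xyz.shape[0], dtype=dtype)
normals = np.zeros_like(xyz)                        # laser scans provide no normals
elements[:] = list(map(tuple, np.concatenate([xyz, normals, rgb], axis=1)))

PlyData([PlyElement.describe(elements, 'vertex')], text=False).write("points3d.ply")

If the scan uses large absolute coordinates (for example UTM), it is usually worth subtracting a common offset from both the points and the camera poses, since the ply stores positions as float32.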
Then inspect the transforms_train.json file; the main work is a targeted modification of scene/dataset_readers.py in the 3DGS code.
def readNerfSyntheticInfo(path, white_background, eval, extension=".jpg"):
    print("Reading Training Transforms")
    train_cam_infos = readCamerasFromTransforms(path, "transforms_train.json", white_background, extension)
    #print("Reading Test Transforms")
    #test_cam_infos = readCamerasFromTransforms(path, "transforms_test.json", white_background, extension)

    #if not eval:
    #    train_cam_infos.extend(test_cam_infos)
    #    test_cam_infos = []
    test_cam_infos = []

    nerf_normalization = getNerfppNorm(train_cam_infos)

    ply_path = os.path.join(path, "points3d.ply")
    if not os.path.exists(ply_path):
        # Since this data set has no colmap data, we start with random points
        num_pts = 100_000
        print(f"Generating random point cloud ({num_pts})...")

        # We create random points inside the bounds of the synthetic Blender scenes
        xyz = np.random.random((num_pts, 3)) * 2.6 - 1.3
        shs = np.random.random((num_pts, 3)) / 255.0
        pcd = BasicPointCloud(points=xyz, colors=SH2RGB(shs), normals=np.zeros((num_pts, 3)))

        storePly(ply_path, xyz, SH2RGB(shs) * 255)
    try:
        pcd = fetchPly(ply_path)
    except:
        pcd = None

    scene_info = SceneInfo(point_cloud=pcd,
                           train_cameras=train_cam_infos,
                           test_cameras=test_cam_infos,
                           nerf_normalization=nerf_normalization,
                           ply_path=ply_path)
    return scene_info
The Test Transforms section is commented out: we do not run eval, so no test set is needed. The default extension is also changed to match the format of your own input images.
def readCamerasFromTransforms(path, transformsfile, white_background, extension=".jpg"):
    cam_infos = []

    with open(os.path.join(path, transformsfile)) as json_file:
        contents = json.load(json_file)
        fovx = contents["camera_angle_x"]

        frames = contents["frames"]
        for idx, frame in enumerate(frames):
            cam_name = os.path.join(path, "images", frame["file_path"])

            # NeRF 'transform_matrix' is a camera-to-world transform
            c2w = np.array(frame["transform_matrix"])
            # change from OpenGL/Blender camera axes (Y up, Z back) to COLMAP (Y down, Z forward)
            c2w[:3, 1:3] *= -1

            # get the world-to-camera transform and set R, T
            w2c = np.linalg.inv(c2w)
            R = np.transpose(w2c[:3, :3])  # R is stored transposed due to 'glm' in CUDA code
            T = w2c[:3, 3]

            image_path = os.path.join(path, cam_name)
            image_name = Path(cam_name).stem
            image = Image.open(image_path)

            im_data = np.array(image.convert("RGBA"))

            bg = np.array([1, 1, 1]) if white_background else np.array([0, 0, 0])

            norm_data = im_data / 255.0
            arr = norm_data[:, :, :3] * norm_data[:, :, 3:4] + bg * (1 - norm_data[:, :, 3:4])
            image = Image.fromarray(np.array(arr * 255.0, dtype=np.byte), "RGB")

            fovy = focal2fov(fov2focal(fovx, image.size[0]), image.size[1])
            FovY = fovy
            FovX = fovx

            cam_infos.append(CameraInfo(uid=idx, R=R, T=T, FovY=FovY, FovX=FovX, image=image,
                                        image_path=image_path, image_name=image_name,
                                        width=image.size[0], height=image.size[1]))

    return cam_infos
These usually need to be adapted to the contents of your own json file. The main points to watch are:
- fovx = contents["camera_angle_x"]: the json must supply the horizontal field of view in radians under this key.
- cam_name = os.path.join(path, "images", frame["file_path"]): images are looked up in the images/ subfolder, and file_path must already include the file extension, since this version no longer appends one.
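To make this concrete, below is a hedged sketch of producing a minimal transforms_train.json that this reader can parse; the intrinsics, file name, and identity pose are placeholders. camera_angle_x is the horizontal field of view in radians, which can be recovered from the focal length as 2 * atan(width / (2 * fx)).

import json
import math
import numpy as np

fx, width = 2400.0, 3840        # placeholder intrinsics of the capture camera
c2w = np.eye(4)                 # placeholder camera-to-world pose from the GPS/IMU rig

meta = {
    # horizontal field of view in radians, derived from focal length and image width
    "camera_angle_x": 2 * math.atan(width / (2 * fx)),
    "frames": [
        {
            # must already include the extension: the modified reader joins it under
            # <location>/images/ and no longer appends one itself
            "file_path": "000001.jpg",
            # camera-to-world matrix in OpenGL/Blender convention (Y up, Z back);
            # readCamerasFromTransforms flips it to the COLMAP convention internally
            "transform_matrix": c2w.tolist(),
        }
    ],
}

with open("transforms_train.json", "w") as f:
    json.dump(meta, f, indent=2)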
def fetchPly(path):
    plydata = PlyData.read(path)
    vertices = plydata['vertex']
    positions = np.vstack([vertices['x'], vertices['y'], vertices['z']]).T
    colors = np.vstack([vertices['red'], vertices['green'], vertices['blue']]).T / 255.0
    normals = np.zeros_like(positions)  # laser point clouds carry no normals: one zero vector per point
    return BasicPointCloud(points=positions, colors=colors, normals=normals)
This part deals with the ply file. A self-collected dataset usually has no point normals, so, following what the COLMAP branch of the code does, we simply set them to zero.
With these changes the code generally runs. Combining laser point clouds with high-precision GPS camera poses solves the cases where COLMAP cannot recover the camera poses at all.
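With the data laid out as above, training is launched the same way as for the bundled Blender scenes; the paths below are placeholders, --eval is omitted because the test split was removed, and -w keeps the background white to match the white_background compositing in the reader:

python train.py -s /path/to/my_scene -m output/my_scene -w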