
Martin Demmer
4DGS - 4D Gaussian Splat
4DGS = 3D Gaussian Splats + time, which means splats in motion.
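To make the "3D + time" idea concrete, here is a minimal, purely illustrative sketch of what a single time-varying Gaussian could look like as a data record. All field names and the simple linear motion model are my own illustration, not the actual format of any tool or paper:

```python
from dataclasses import dataclass

@dataclass
class Gaussian4D:
    # the 3D Gaussian Splat part
    position: tuple[float, float, float]         # mean (x, y, z)
    scale: tuple[float, float, float]            # ellipsoid radii
    rotation: tuple[float, float, float, float]  # quaternion (w, x, y, z)
    opacity: float
    color: tuple[float, float, float]            # RGB (real splats store SH coefficients)
    # ...plus time: when the splat exists and how its center moves
    t_center: float                              # temporal mean
    t_scale: float                               # temporal extent
    velocity: tuple[float, float, float]         # toy linear motion model (illustrative)

    def position_at(self, t: float) -> tuple[float, float, float]:
        """Mean position at time t under the toy linear motion model."""
        dt = t - self.t_center
        return (self.position[0] + self.velocity[0] * dt,
                self.position[1] + self.velocity[1] * dt,
                self.position[2] + self.velocity[2] * dt)
```

Real 4DGS methods use richer motion representations (e.g. learned deformation fields), but the core idea is the same: every splat carries a temporal footprint on top of its spatial one.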
I am a bit proud. :) Thanks to a discovery from last week's investigation, indie 4DGS is a bit easier now!
​​
In the Gaussian Splatting community, the software Postshot is known as a "go-to".
On the Discord Server https://discord.gg/ttQrPQ6v you can find active members of the community.
There I announced that I had issues achieving 4DGS with the publicly known and available tools.
I approached my test with RealityCapture because I wanted to use markers in addition to the camera alignment.
Sadly, several members of the community had already witnessed the same issue I had!
They tried to achieve consistent results across several frames with the COLMAP or Radiance Field export of the camera positions, plus a sparse-cloud export, in combination with the images.
But somehow it never worked across several frames in a "batch approach" (you can script the CLI version of Postshot yourself, or search for Ollie Huttunen's Batch Trainer).
I searched for help in several directions, and the community is very supportive and friendly, but nobody had found an answer for the RealityCapture road.
I went on and tested and tested and tested, and used Copilot to write scripts, as I am not a coder. Still, I know what I want to achieve, and it looks like I managed to write the right descriptions and tasks for Copilot to create several scripts for post-processing my home-recorded test data set.
I only used 7 frontal RGB streams out of my 10 Azure Kinect DK cams, which I normally use with my volumetric video software by ScannedReality.
So the results of this test are certainly not on the level of the big multi-cam rig owners like XangleStudio or Infinite-Realities (Capturing Spatial Memories).
But I wanted to know if 4DGS splats can be achieved by a single creator like me!!!
And YES, they can!!!
*.CSV is the solution, and here is the result!!!
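For illustration, here is the kind of post-processing script I mean: a minimal sketch that converts a per-frame camera-pose CSV into COLMAP-style images.txt lines (the real COLMAP layout is `IMAGE_ID QW QX QY QZ TX TY TZ CAMERA_ID NAME`, followed by a 2D-points line). The CSV column names (name, x, y, z, heading, pitch, roll), the Euler-angle order, and the camera-to-world vs. world-to-camera convention here are all assumptions; check them against your own RealityCapture export preset before trusting the output.

```python
import csv
import math

def euler_to_quat(heading_deg, pitch_deg, roll_deg):
    """Yaw/pitch/roll in degrees (ZYX order - an assumed convention) -> (w, x, y, z)."""
    y, p, r = (math.radians(a) / 2.0 for a in (heading_deg, pitch_deg, roll_deg))
    cy, sy = math.cos(y), math.sin(y)
    cp, sp = math.cos(p), math.sin(p)
    cr, sr = math.cos(r), math.sin(r)
    return (cr*cp*cy + sr*sp*sy,
            sr*cp*cy - cr*sp*sy,
            cr*sp*cy + sr*cp*sy,
            cr*cp*sy - sr*sp*cy)

def quat_to_matrix(w, x, y, z):
    """3x3 rotation matrix from a unit quaternion."""
    return [
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ]

def csv_to_colmap_images(csv_path, out_path):
    """Convert a pose CSV (assumed columns: name, x, y, z, heading, pitch, roll)
    into COLMAP images.txt entries. If your export stores camera-to-world
    orientation instead, conjugate the quaternion first."""
    with open(csv_path, newline="") as f, open(out_path, "w") as out:
        for image_id, row in enumerate(csv.DictReader(f), start=1):
            cx, cy_, cz = (float(row[k]) for k in ("x", "y", "z"))
            q = euler_to_quat(float(row["heading"]), float(row["pitch"]), float(row["roll"]))
            R = quat_to_matrix(*q)
            # COLMAP stores world-to-camera: t = -R * C, with C the camera center
            t = [-(R[i][0]*cx + R[i][1]*cy_ + R[i][2]*cz) for i in range(3)]
            out.write(f"{image_id} {q[0]} {q[1]} {q[2]} {q[3]} "
                      f"{t[0]} {t[1]} {t[2]} 1 {row['name']}\n")
            out.write("\n")  # second line (2D point observations) left empty
```

Run once per frame folder and you have the per-frame camera files a batch trainer needs; the per-frame loop itself is just a few more lines of shell or Python.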
You may ask yourself why the results of my Gaussian Splat test look so much less "clean" than my classic volumetric video results.
The reason for the disparity lies in the technical requirements. Achieving aesthetically pleasing Gaussian Splats needs a significantly larger number of cameras and a wider range of angles than the classic volumetric video approach. The "clean" result is directly related to your viewing angle.
Numerous ongoing experiments worldwide aim to enhance the quality of 4D Gaussian Splats while reducing the number of cameras and improving the interpolation between viewing angles. In the past, remarkable results were only achieved with 100+ cameras.
​
Lately, that number has come down more and more: to 80, then 40, and so on.
​
Still, for amazing results, most say synced global-shutter solutions and big rigs are the best way, but some already say that "if the rolling shutter is fast enough," it can produce more or less OK, or even better than OK, results.
​
To achieve perfection, as I understand it, you'd better stay with global-shutter cam rigs.
​
Such rigs quickly reach very high investment costs!
​
Each global-shutter cam alone can quickly cost several thousand euros, and then you need a lens, the sync option (best with some kind of PC remote-trigger SDK), cables, network switches, servers to handle the massive data, and so on.
​
So, I have learned to handle Gaussian data sets, but when it comes to requests to join a production as a volumetric video artist, I will mostly stay on the road of classic volumetric video. However, I have some ideas about how to take my road to the next level!!!
​
We will see! Stay tuned!
​
I'm at a point where I need to reach out to some of my contacts in the cinema industry. I'm hoping to secure a larger number of cameras for a period of time to test my next ideas. This could be a crucial step in advancing volumetric storytelling via 4D Gaussian Splat tests.
​
I'm also exploring the potential for collaboration with artistic and technical departments at one or two universities to lead these experiments. If you are interested in such collaboration, I would be thrilled to hear from you (best over LinkedIn). Your input could be a game-changer in this field.
​
This could be a significant step forward. As my investigations are often limited because I finance and conduct my tests entirely myself, your support and collaboration could lead to groundbreaking discoveries in the field of volumetric video and cinema technology.