I got you mapped (spatially mapped) Part 1:

Ahh, strategy games. All great RTS games need terrain, and we need a great terrain to get started: hills, mountains, valleys, water and more. It's no secret that I'm not an artist, but I've pushed a few pixels around before and I have an idea of where to start. I decided to try out the next gen of gaming, or at least a more up-close-and-personal version of it. Let's build for the HoloLens first and port to other VR systems later. While the device is expensive, the HoloLens has some pretty cool tech. The built-in spatial mapping, especially, will be useful. I think this is a great starting point for the game and this adventure.

So, how do we turn the living room into a battlefield? The HoloLens' built-in HPU is designed to process sensor data, and it provides vertex, normal and index buffers to the calling application. The data can be polled or collected with event-based methods. Cool!  After we ask for the buffers we have to do some reverse engineering: decompose the vertex buffers and the spatial coordinate matrices, then rebuild them into a format we can use. How we use it is totally up to us. The API is DirectX native, which makes things a whole lot simpler.
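To give you an idea of the shape of that setup, here's a rough sketch in C++/CX. The MeshCollector class, the m_surfaceObserver member and the ProcessSurfaces call are my placeholders, not actual project code:

#include <ppltasks.h>

using namespace Windows::Perception::Spatial;
using namespace Windows::Perception::Spatial::Surfaces;

void MeshCollector::StartObserving()
{
    // Spatial perception access has to be granted before any meshes come back.
    concurrency::create_task(SpatialSurfaceObserver::RequestAccessAsync())
        .then([this](SpatialPerceptionAccessStatus status)
    {
        if (status != SpatialPerceptionAccessStatus::Allowed) { return; }

        m_surfaceObserver = ref new SpatialSurfaceObserver();
        // In a real app you'd also call SetBoundingVolume() so the observer knows which region to watch.

        // Event-based collection: fires whenever the HPU adds, updates or removes a surface mesh.
        m_surfaceObserver->ObservedSurfacesChanged +=
            ref new Windows::Foundation::TypedEventHandler<SpatialSurfaceObserver^, Platform::Object^>(
                [this](SpatialSurfaceObserver^ sender, Platform::Object^)
                {
                    ProcessSurfaces(sender->GetObservedSurfaces());
                });

        // Polling alternative: call m_surfaceObserver->GetObservedSurfaces() from the update loop instead.
    });
}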

What does the buffer look like? Well, visually like this:

[Images: the mapped couch, and the couch from the other side]

It's rough. A lot of holes and missing information. Unfortunately, that coffee table is not a hoverboard. As you can see, there is a lot of work to be done.

The Plan:

  1. Strip the buffer down to the basic model space vertex positions
  2. Use the normal buffer to help determine shape directions
  3. Produce a visual output that represents each of the points in the room

We need to be able to test whether the data extraction is working and verify that it's accurate. This will also give us a great way to test our mesh alignments as we progress. I've heard this many times before, but any amount of time spent on visualizing data is never a waste!

The spatial observer produces an IMapView of all the meshes that were created, exposed via the SpatialSurfaceInfo class. A SpatialSurfaceInfo stores the mesh id along with the necessary buffers. Unfortunately, the buffers are already set up to be processed by the GPU, which means we have to extract the data back into a vector format. Each mesh also needs to be translated and scaled to make it fit the original room size. We're not interested in the scaling component at the moment, but we do need the translation matrix. Without it, each mesh ends up situated in its own world space, causing things like the couch to float outside of the room.
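To make that concrete, here's roughly how pulling a mesh out of that IMapView might look. The triangle density, the options and the surrounding method are assumptions on my part, not the final code:

using namespace Windows::Perception::Spatial;
using namespace Windows::Perception::Spatial::Surfaces;

void MeshCollector::ProcessSurfaces(
    Windows::Foundation::Collections::IMapView<Platform::Guid, SpatialSurfaceInfo^>^ surfaces)
{
    auto options = ref new SpatialSurfaceMeshOptions();
    options->IncludeVertexNormals = true;

    for (auto pair : surfaces)
    {
        SpatialSurfaceInfo^ surfaceInfo = pair->Value;

        // Ask the system to bake the latest mesh for this surface id.
        concurrency::create_task(surfaceInfo->TryComputeLatestMeshAsync(1000.0, options))
            .then([this](SpatialSurfaceMesh^ mesh)
        {
            if (mesh == nullptr) { return; }

            // GPU-ready buffers, plus the per-mesh coordinate system we need for the translation.
            SpatialSurfaceMeshBuffer^ positions = mesh->VertexPositions;
            SpatialSurfaceMeshBuffer^ normals   = mesh->VertexNormals;
            SpatialSurfaceMeshBuffer^ indices   = mesh->TriangleIndices;
            SpatialCoordinateSystem^  meshSpace = mesh->CoordinateSystem;
            // mesh->VertexPositionScale holds the scale component we're ignoring for now.
        });
    }
}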

So how do we extract data from the buffer?  The buffer needs to be cast back to a SHORT4* array. The buffer data format is R16G16B16A16IntNormalized, meaning each component is 2 bytes: a signed 16-bit integer, normalized so that 32767 represents 1.0.
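The vertex data comes back wrapped in an IBuffer, so we first need the raw byte pointer. Something along these lines should do it (assuming the mesh variable from the sketch above; error handling omitted):

#include <robuffer.h>
#include <wrl.h>
#include <DirectXPackedVector.h>

// IBuffer only exposes its bytes through the COM interface IBufferByteAccess.
Windows::Storage::Streams::IBuffer^ positionData = mesh->VertexPositions->Data;

Microsoft::WRL::ComPtr<Windows::Storage::Streams::IBufferByteAccess> byteAccess;
reinterpret_cast<IInspectable*>(positionData)->QueryInterface(IID_PPV_ARGS(&byteAccess));

byte* rawBytes = nullptr;
byteAccess->Buffer(&rawBytes);

// Reinterpret the bytes as packed 16-bit signed-normalized x/y/z/w vertices.
auto rawVertexData = reinterpret_cast<DirectX::PackedVector::XMSHORT4*>(rawBytes);
unsigned int stride      = mesh->VertexPositions->Stride;        // bytes per vertex
unsigned int vertexCount = mesh->VertexPositions->Data->Length;  // total bytes in the buffer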

Traversing the short4 array to create the vertex positions in Vector3 format:

std::vector<DirectX::XMFLOAT3> tempVerts;

// vertexCount is the buffer length in bytes, so dividing by the stride gives the element count.
for (unsigned int index = 0; index < vertexCount / stride; ++index)
{
    const DirectX::PackedVector::XMSHORT4* ss = rawVertexData;
    rawVertexData++;

    // Convert each signed-normalized 16-bit component back to a float in the [-1, 1] range.
    DirectX::XMFLOAT3 byteVec = DirectX::XMFLOAT3(
        ss->x / 32767.0f,
        ss->y / 32767.0f,
        ss->z / 32767.0f);

    tempVerts.push_back(byteVec);
}

As we process the data, we end up with a vector full of vertex position data. This data still needs to be translated by the mesh coordinate system, but that happens in our update loop. Because the camera space can and does change as the lenses move around, we have to reapply the world translations on each update; a one-time translation won't work in this case. To visualize the data, I created a diamond (a very simplified sphere) and placed one at each vertex position. The shader provides a color scale based on the Y value, which helps break up the sea of dots.
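Here's the shape of that per-frame translation, in sketch form. The m_baseCoordinateSystem member (the stationary frame the cameras render against) is a placeholder, and meshSpace is the mesh->CoordinateSystem value from earlier:

// The mesh's coordinate system can only be related to the world frame by frame.
auto tryTransform = meshSpace->TryGetTransformTo(m_baseCoordinateSystem);
if (tryTransform != nullptr)
{
    // The boxed float4x4 shares its layout with XMFLOAT4X4, so it can be loaded directly.
    Windows::Foundation::Numerics::float4x4 meshToWorldValue = tryTransform->Value;
    DirectX::XMMATRIX meshToWorld =
        DirectX::XMLoadFloat4x4(reinterpret_cast<const DirectX::XMFLOAT4X4*>(&meshToWorldValue));

    // Re-position every diamond spawned from this mesh for the current frame.
    for (const auto& vert : tempVerts)
    {
        DirectX::XMVECTOR worldPos =
            DirectX::XMVector3Transform(DirectX::XMLoadFloat3(&vert), meshToWorld);
        // ...write worldPos into that diamond's per-instance data...
    }
}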

 

[Images: first spatial mapping result, and a second view]

Hey, we have walls and a ceiling!  But where are the couch and the floor? Good question. I see other artifacts as well, such as meshes outside of the room. Time to debug! We'll save that for part 2.

So far, a great start. We extracted the data and visualized the points. Hopefully soon we'll have correct alignment and we can begin creating a custom mesh based on the mapped-out room. Stay tuned for the next part. Also, notice something? Tons of dots (7,000 to be exact) and I'm not complaining about the performance?  Early optimization is bad, but in this instance, instancing is necessary. The HoloLens just can't handle too many draw calls. Instancing keeps performance up with one draw call and an instance buffer. I'll cover that a bit more later on, along with stereo rendering and a few other shader topics.
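For the curious, the instancing idea looks roughly like this. The buffer names, the VertexPosition struct and the counts are placeholders; the real version comes in a later post:

// One shared diamond mesh in slot 0, one world position per diamond in slot 1.
D3D11_BUFFER_DESC instanceDesc = {};
instanceDesc.Usage          = D3D11_USAGE_DYNAMIC;
instanceDesc.ByteWidth      = sizeof(DirectX::XMFLOAT3) * instanceCount;   // ~7,000 positions
instanceDesc.BindFlags      = D3D11_BIND_VERTEX_BUFFER;
instanceDesc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;

ID3D11Buffer* instanceBuffer = nullptr;
device->CreateBuffer(&instanceDesc, nullptr, &instanceBuffer);

ID3D11Buffer* buffers[2] = { diamondVertexBuffer, instanceBuffer };
UINT strides[2] = { sizeof(VertexPosition), sizeof(DirectX::XMFLOAT3) };
UINT offsets[2] = { 0, 0 };
context->IASetVertexBuffers(0, 2, buffers, strides, offsets);

// One draw call submits every diamond instead of ~7,000 separate calls.
context->DrawIndexedInstanced(diamondIndexCount, instanceCount, 0, 0, 0);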
