SteamVR Tracking License Recap and Answering Questions

Review

First of all, really sorry about the delay here between hardware posts.  This one was stuck in approval for a while and made its way around, ultimately receiving the go-ahead from Valve.  So while it may have been a long time coming, hopefully this in-depth review will help those of you interested in the design of SteamVR tracked objects better understand the nuances of sensor placement.  Expect to see a good deal more hardware posts in the coming weeks (I’ve got quite a backlog to push out!).

So again, the training course was great.  As a launchpad, if you want to read over my full experience of it, here are links to Day 1, Day 2, and Day 3 of the training, as well as the category page for the tracking license.

Reviewing all the content learned and experiences had, I’d have to say that the most important takeaway from the course was understanding and optimizing sensor placement.  The math behind it is just beautiful.  Below I’ll break down what each additional sensor adds to tracking stability:

0 Sensors (Basestation Protocol)
Well, all you have right now is a space with lasers sweeping through it.  The protocol a single basestation emits goes as follows: an array of infrared LEDs pulses for a specific length of time.  Then, a vertical fan of infrared light is swept horizontally through the space, covering a 120-degree field of view at a rate of 60 Hz.  After another pulse, a horizontal fan with the same characteristics sweeps vertically.  This process repeats for as long as the basestations have power!
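To make that timing concrete, here is a minimal Python sketch of the repeating cycle, assuming only the numbers above (60 Hz sweeps, 120-degree usable field of view).  The names and structure are mine, purely for illustration:

    SWEEP_HZ = 60
    ROTATION_S = 1.0 / SWEEP_HZ              # one full 360-degree rotation: ~16.67 ms
    WINDOW_S = ROTATION_S * 120 / 360        # the 120-degree usable window: ~5.56 ms

    def basestation_cycle():
        """Yield the phases a sensor observes from one basestation, forever."""
        while True:
            yield "sync pulse (IR LED array flash)"
            yield "vertical fan sweeps horizontally"    # fixes the horizontal angle
            yield "sync pulse"
            yield "horizontal fan sweeps vertically"    # fixes the vertical angle

    cycle = basestation_cycle()
    for _ in range(4):
        print(next(cycle))
    print(f"full rotation: {ROTATION_S*1e3:.2f} ms; usable window: {WINDOW_S*1e3:.2f} ms")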

1 Sensor (X- & Y-Position)
Your first sensor will give you an x and y radial coordinate.  Think about how a single sensor sees a basestation: it sees the sync pulse and starts its timer.  Then, once contacted by the first sweep, the time signature of this event is saved, and our sensor knows it exists somewhere within the area of that vertical fan at that specific time.  After the second sweep arrives, our sensor can further narrow down its position to where the timings of the horizontal and vertical sweeps overlap.  Visualizing this, we now have a single line of potential locations radiating from the basestation.  Our radial x and y coordinates have been established, but our sensor could still reside anywhere along the radial z-axis.
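As a back-of-the-envelope sketch (my own toy math, not Valve’s solver), here is how those two time signatures turn into angles, and why every depth along the resulting ray fits the timings equally well:

    import math

    SWEEP_HZ = 60
    DEG_PER_S = 360.0 * SWEEP_HZ           # rotor angular rate: 21,600 deg/s

    def sweep_angle_deg(t_hit_s):
        """Fan angle at the moment it crossed the sensor, timed from the sync pulse."""
        return DEG_PER_S * t_hit_s

    # Made-up hit times; ~4.17 ms after sync puts the fan at 90 degrees,
    # i.e. pointing straight out of the basestation.
    az = sweep_angle_deg(4.30e-3)          # horizontal (x) angle: ~92.9 deg
    el = sweep_angle_deg(4.17e-3)          # vertical (y) angle: ~90.1 deg

    def candidate_position(az_deg, el_deg, depth_m):
        """A position consistent with the timings at a chosen depth; EVERY
        depth reproduces the same two hit times, hence the z ambiguity."""
        x = depth_m * math.tan(math.radians(az_deg - 90.0))
        y = depth_m * math.tan(math.radians(el_deg - 90.0))
        return (x, y, depth_m)

    for depth in (0.5, 2.0, 5.0):
        print(candidate_position(az, el, depth))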

2 Sensors (Z-Rotation)
Next we add a second sensor.  The benefit here is that by knowing the x- and y-position of each sensor (which are a fixed distance apart), we get an understanding of how our object is oriented about the z-axis.  And let me say here that these (and all future) sensors need to be rigidly attached!  Without that, our system will simply know the radial position of each sensor but have no reference for where the sensors are relative to one another, and therefore will not be able to lock down any more pose information.  But a problem still presents itself with two sensors on the object: each sensor could still exist anywhere along its radial z-axis.  Because of that, our object could be anywhere from extremely distant and facing the basestation head-on to extremely near and severely angled.  The sensors would still be hit by the laser sweeps at the same times.  This is referred to as pose translation error.
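Here is a numeric illustration of pose translation error, using a toy 2D model of my own (not Valve’s solver): a rigid bar with two sensors 10 cm apart can sit on the same pair of rays at many different depths and tilts, and every one of those poses produces identical sweep timings.

    import math

    L = 0.10                                       # rigid sensor spacing, metres
    th1, th2 = math.radians(-1.4321), math.radians(1.4321)  # measured ray angles
    u1 = (math.sin(th1), math.cos(th1))            # unit ray toward sensor 1
    u2 = (math.sin(th2), math.cos(th2))            # unit ray toward sensor 2
    dot = u1[0] * u2[0] + u1[1] * u2[1]

    def second_depth(d1):
        """Place sensor 1 at depth d1 on ray 1; solve |p1 - p2| = L for sensor 2."""
        return dot * d1 + math.sqrt((dot * d1) ** 2 - (d1 * d1 - L * L))

    for d1 in (1.0, 1.5, 1.9):                     # three very different depths...
        d2 = second_depth(d1)
        p1 = (d1 * u1[0], d1 * u1[1])
        p2 = (d2 * u2[0], d2 * u2[1])
        tilt = math.degrees(math.atan2(p2[1] - p1[1], p2[0] - p1[0]))
        print(f"depths ({d1:.2f}, {d2:.2f}) m, bar tilted {tilt:5.1f} deg "
              f"-> same two sweep angles")

At a depth of about 2 m the bar sits square to the basestation; bring it nearer and it has to tilt to keep both sensors on their rays, yet the sweep timings never change.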

3 Sensors (Z-Position)
On to a third sensor, let’s first imagine all three sensors together in a line.  We’ll still see the same pose translation error as described above; the sensors’ spacing stays proportionally consistent as the angle increases and the position grows nearer.  To remedy this, let’s place that third sensor so it is non-collinear with the other two.  Now, when our object angles towards the basestation and gets closer, we would expect that third, non-collinear point to show a more pronounced offset from the other two.  Using this, three sensors can finally lock down our z-position.  Hooray!  But we’re not out of the woods yet.  No matter how precisely they’re manufactured, basestations will inherently have a small (VERY small) amount of wobble to them, and there are also IR distortions inside the tracking volume (such as heat waves).  Because of this minor error, the current system is not precise enough to discern between three sensors that are facing head-on and three sensors that are skewed by a few degrees along the x- and y-axes.  This is referred to as pose rotation error.
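Extending the same toy 2D model (again, an illustration of the geometry, not Valve’s actual solver), we can construct the ambiguous “twin” pose explicitly and watch a third, non-collinear sensor give it away:

    import math

    L = 0.10
    s1, s2, s3 = (-0.05, 0.0), (0.05, 0.0), (0.0, 0.03)   # body frame; s3 off the line

    def project(s, tx, tz, phi):
        """Sweep angle (deg) at the basestation (origin) for body point s under pose (tx, tz, phi)."""
        x = tx + s[0] * math.cos(phi) - s[1] * math.sin(phi)
        z = tz + s[0] * math.sin(phi) + s[1] * math.cos(phi)
        return math.degrees(math.atan2(x, z))

    true_pose = (0.0, 2.0, 0.0)                  # 2 m out, square to the basestation
    th1, th2 = (math.radians(project(s, *true_pose)) for s in (s1, s2))

    # Build a rival pose with sensor 1 pinned at a different depth on its ray.
    u1, u2 = (math.sin(th1), math.cos(th1)), (math.sin(th2), math.cos(th2))
    dot = u1[0] * u2[0] + u1[1] * u2[1]
    d1 = 1.4
    d2 = dot * d1 + math.sqrt((dot * d1) ** 2 - (d1 * d1 - L * L))
    p1 = (d1 * u1[0], d1 * u1[1])
    p2 = (d2 * u2[0], d2 * u2[1])
    phi = math.atan2(p2[1] - p1[1], p2[0] - p1[0])        # tilt of the rival pose
    rival_pose = ((p1[0] + p2[0]) / 2, (p1[1] + p2[1]) / 2, phi)

    for s in (s1, s2, s3):
        print(f"{project(s, *true_pose):+8.4f} deg vs {project(s, *rival_pose):+8.4f} deg")
    # s1 and s2 match exactly, but s3 disagrees by ~0.8 deg -- a difference the
    # system can measure, which is what finally pins down the z-position.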

4 Sensors (X- & Y-Rotation)
So now to the last required sensor: number four.  If our fourth sensor resided in the same plane as the other three, we would have the same issue with pose rotation error.  To get around that, we pull it out of the plane.  Four non-coplanar sensors help because, as the object rotates, we now get the parallax of angular motion, and that parallax eliminates the wide range of possible poses left open by the inherent basestation error.  We refer to this as establishing the baseline.
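Here is a quick sanity check of that parallax claim, with toy numbers of my own choosing (not Valve’s): rotate an object two degrees about the y-axis and compare how far each sensor’s measured azimuth actually moves.

    import math

    DEPTH = 2.0                                   # object sits 2 m from the basestation

    def azimuth_deg(x_body, z_body, rot):
        """Azimuth of a body-frame point after rotating the body about its y-axis."""
        x = x_body * math.cos(rot) + z_body * math.sin(rot)
        z = DEPTH - x_body * math.sin(rot) + z_body * math.cos(rot)
        return math.degrees(math.atan2(x, z))

    sensors = {"in-plane    ": (0.05, 0.0),       # lies in the object's plane
               "out-of-plane": (0.0, 0.03)}       # pulled 3 cm off the plane

    rot = math.radians(2.0)
    for name, (xb, zb) in sensors.items():
        shift = abs(azimuth_deg(xb, zb, rot) - azimuth_deg(xb, zb, 0.0))
        print(f"{name} sensor moves {shift:.5f} deg")
    # The in-plane sensor shifts ~0.0004 deg (lost in basestation wobble), while
    # the out-of-plane sensor shifts ~0.03 deg: roughly 80x the parallax, enough
    # for the solver to see x- and y-rotation.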

>4 Sensors (Occlusion Protection & Redundancy)
Great!  We have a reliable motion-captured object that SteamVR will be happy to use!  But of course the issue still exists of turning the device away from a basestation or occluding the sensors.  For this, we need to be sure that all angles and occlusions are accounted for.  That’s where Valve’s HMD Designer software comes into play (visualizations can be seen in my Day 2 post, as well as in future content).  Additionally, adding more than four sensors in the same visible area creates redundancies, and (while it will increase the cost of the device) redundancies can only help maintain even more stable tracking.  The maximum number of sensors the present firmware supports is 32 on one device, and that device is an HMD with very few inputs.  For a controller with 1 analog and 5 digital inputs (plus the x/y serial input of the trackpad), I believe the sensor cap is 28.

So that about wraps up an in-depth review of the SteamVR Tracking system!  Again, my apologies that it took so long to post publicly; I’m really hoping to be in a steady rhythm with this blog now.  Feel free to let me know if any of these points require clarification.

Answering Community Questions

Below are (paraphrased) questions that were raised by members of the community on Reddit, by email, or here on the blog.  If you asked a question through any of these media and it is not featured here, it may be that it was very specific, or covered in great depth in a previous post.  If you don’t personally hear from me sometime in the next week, feel free to ping me with that question again.

Any word on larger tracked volumes / more than 2 lighthouses?
Asked in combination by Lanfeix on Reddit and Bernd K. over email
Valve is always prototyping and testing many ideas, larger volumes with more than two lighthouses certainly being one of them.  As far as licensees are aware, however, no one design has been locked down yet.  It was mentioned at Steam Dev Days that Valve hopes for lighthouse to be as ubiquitous as WiFi in the future, so take that for what it’s worth!

What are other developers working on?  Wireless HMDs, body tracking, modified controllers, etc…
Asked in combination by Ducksdoctor on Reddit, Guichla on Reddit, and darrellspivey on Reddit
I am not at liberty to disclose any developmental plans for my colleagues from the training.  Regarding wireless HMD modifications, however, TPCAST was announced as a Vive accessory while we were in training.  And as for body tracking, I can assure you that it is a very frequently requested device / suite (and I’m personally working on evolving the Talaria locomotion wearable to include lighthouse tracking; this will open up foot-tracking to the masses, plus many other awesome features!  More info on that to come over the next month or so).

What resources are available to the public to prototype for lighthouse?  Will audio/video recordings and documentation ever be made publicly available?
Asked in combination by Mentalyspoonfed on Reddit, thebigman433 on Reddit, VRGameZone on Reddit, and over email
Anyone is welcome to play around with the technology – Triad Semiconductor sells their TS3633 castellated modules for about $7 a pop, and they have everything on board to sense the IR signal from a basestation and convert it to a digital output.  That being said, without access to the hardware (which comes with the license), you will have no way of communicating that sensor data to SteamVR – you would be required to run motion capture in your own application (but that’s not to say it can’t be done!).  At this time, taking Synapse’s SteamVR Tracking training is required to ensure everyone is properly prepared to develop for lighthouse in these early days of the technology.  In the future, Valve intends to make the training and documentation fully available to the public, but that will likely come after this first batch of trainees has had enough time to give feedback and work out the kinks.  I personally was able to take an audio recording of the course; however, it is restricted to internal use only.

I thought the magic number of required sensors used to be five – why do you keep saying four?  Is it possible to capture position using just one sensor?
Asked in combination by Mark Hogan on the blog and Bernd K. over email
As stated above, one sensor will let you lock down an x and y radial coordinate, but that sensor could exist anywhere in the radial z direction.  A second sensor locks down your rotation about the z-axis, but again, position in the z direction will still be ambiguous (the object could be far away and facing the basestation head-on, or very close at an extreme angle; both sensors would still be hit at the same times).  Adding a third (non-collinear) sensor will allow you to establish the z-position of your device.  The reason for a fourth (non-coplanar) sensor is to aid in solidifying x- and y-axis rotations, as without the parallax of that fourth sensor it is very difficult to determine minor changes in x- and y-rotation.  As for why Valve used to say that you needed five to lock down a position, we don’t really know.  Stable tracking is successfully achieved with four – perhaps the fifth was a safety measure for redundancy.

How do tracked objects know which lighthouse hit them?
Asked by Mark Hogan on the blog
The sync pulse from each basestation lasts for a different amount of time depending on which channel that base is set to.  The IR sensors on each device can measure this difference, and it acts as a means of identification for each lighthouse.
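As a rough sketch of how that identification might look in firmware (the pulse-width windows below are invented for illustration; the real timing values come with Valve’s documentation):

    # Hypothetical pulse-width windows, in seconds; the real values differ.
    CHANNEL_WINDOWS = [
        (50e-6, 70e-6, "basestation b"),
        (70e-6, 90e-6, "basestation c"),
    ]

    def identify_lighthouse(pulse_width_s):
        """Bucket a measured sync pulse width into a lighthouse channel."""
        for lo, hi, name in CHANNEL_WINDOWS:
            if lo <= pulse_width_s < hi:
                return name
        return "unknown"

    print(identify_lighthouse(65e-6))   # -> "basestation b"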

What’s the difference between a ‘chiclet’ and the TS3633-CM1?
Asked by Bernd K. over email
Both modules are functionally the same.  The chiclet provided to developers by Synapse is a more compact, dual-sided design that has a flex cable connector on it.  The CM1 is single-sided and pins out to castellations.

What’s going to be covered at the SXSW SteamVR Tracking session?
Asked by VRGameZone on Reddit
This session will be an overview of what the training covers – pretty much all the content we learned on Day 1 (not the full training).  To my understanding, there is no need to sign up; simply attend!

So that’s everything you folks have asked me so far!  I hope the community finds the information here useful.  If you have any more questions, feel free to throw them to the comments, or email me at blog@talariavr.com.  I’ve been quite busy this past month, so get excited for what’s coming next!
