SteamVR Tracking License Day 1
We started out with a tour around the Synapse offices (Synapse is the product design team Valve worked with on multiple concepts for the Steam Link and Steam Controller, and they have been integral in streamlining the technical side of the SteamVR system). It’s a brilliantly creative environment with an open floor plan, a rock wall for destressing, and dogs scurrying around everywhere, even in the elevators. Only 10 other people were in the training with me (this is normal; one training course teaches between 10 and 12 people). This makes it feel really personal, and asking questions and receiving feedback tailored to each individual’s design concept is a massive benefit of this structure. And by the way, for those of you wanting this license to be more easily accessible, Synapse/Valve hope to release all training onto Steam / digital distribution in the “future” (it’s ambiguous how long before that’s a thing; could be months, could be years).
So, after receiving an overview of the course we got into the thick of training. The first day ran only from 2:00 to 5:00. We tore into a pretty in-depth analysis of lighthouse functionality, protocol, and science. A big takeaway is that lighthouse is phenomenal at detecting x and y sensor positions; knowing the z position is currently its limiting factor (a good amount of the content I’ll be posting here may be things you already know, or could work out with some friendly maths. This is because Valve has been pretty open about their tech, and anything that is genuinely new information is often reserved for licensees specifically). You need multiple sensors spaced a significant distance apart to determine how far an object is from a basestation: the farther away an object is, the faster the lighthouse sweep crosses all of its sensors, and that timing is what determines z position. (If you are unfamiliar with the specifics of lighthouse tracking, feel free to check out Valve’s website, any number of press articles, or Oliver Kreylos’s very in-depth analysis of the tracking system.) Geometry and math dictate that you need 4 sensors to catch a position, and of those four, one must reside outside the plane that the other 3 define. That’s really the core when it comes to sensor placement. The clock speed on the current boards is not an issue for positional resolution; the real limitation lies in basestation rotor stability. Any potential for the slightest wobble or jitter has been dramatically reduced through careful engineering on Valve and Synapse’s part; the rotors spin in a sort of fluid bearing (solid contact would wear down and introduce uneven forces).
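To make the z-from-timing idea concrete, here is a rough sketch in Python. It is not Valve’s actual solver; the 60 Hz sweep rate, the helper names, and the small-angle approximation (two sensors a known baseline apart subtend an angle of roughly baseline/z as seen from the basestation) are all my own assumptions for illustration:

```python
import math

ROTOR_HZ = 60.0                 # assumed sweep rate: one rotation per sweep cycle
OMEGA = 2 * math.pi * ROTOR_HZ  # angular velocity of the sweep, rad/s

def hit_angle(t_hit_s):
    """Angle the sweep had rotated through when it crossed a sensor,
    recovered from that sensor's hit timestamp (seconds into the sweep)."""
    return OMEGA * t_hit_s

def estimate_depth(baseline_m, t_hit_a, t_hit_b):
    """Small-angle depth estimate: two sensors baseline_m apart subtend
    delta_theta ~ baseline / z from the basestation, so z ~ baseline / delta_theta.
    The wider the baseline, the larger (and more measurable) the timing gap."""
    delta_theta = abs(hit_angle(t_hit_a) - hit_angle(t_hit_b))
    return baseline_m / delta_theta

# Sensors 10 cm apart, hit 88.4 microseconds apart during the sweep:
# delta_theta = OMEGA * 88.4e-6 ~ 0.0333 rad, so z ~ 3 m
```

Note how quickly the timing gap shrinks with distance; this is exactly why wide sensor spacing matters for z resolution.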
So with all those problems identified, we learned two key concepts for optimizing tracking: maximize the distance between sensors (this allows for better z position calculation), and ensure that sensors are placed outside of the plane that the other sensors create (this helps resolve rotational ambiguities).
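The out-of-plane rule is easy to check numerically. This is a hypothetical helper of my own (nothing from the training materials) that measures how far a fourth sensor sits from the plane defined by three others, using a cross product for the plane normal:

```python
def out_of_plane_distance(p0, p1, p2, p3):
    """Distance of sensor p3 from the plane through sensors p0, p1, p2
    (all 3-tuples of coordinates). A value near zero means the four
    sensors are coplanar, which leaves rotational ambiguity."""
    def sub(a, b):
        return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
    def cross(a, b):
        return (a[1]*b[2] - a[2]*b[1],
                a[2]*b[0] - a[0]*b[2],
                a[0]*b[1] - a[1]*b[0])
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    n = cross(sub(p1, p0), sub(p2, p0))   # plane normal (unnormalised)
    norm = dot(n, n) ** 0.5
    if norm == 0:
        return 0.0                        # p0, p1, p2 are collinear: degenerate
    return abs(dot(sub(p3, p0), n)) / norm

# A flat square of sensors is coplanar (bad):
# out_of_plane_distance((0,0,0), (1,0,0), (0,1,0), (1,1,0))     -> 0.0
# Lifting one sensor 5 cm off the plane fixes it:
# out_of_plane_distance((0,0,0), (1,0,0), (0,1,0), (1,1,0.05))  -> 0.05
```

In a real design you would run this over every candidate group of four sensors and push the minimum distance as high as the enclosure allows.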
The development board they give us (codename Watchman V3) is a PCB roughly one square inch in size. If you’re interested in its technical insides, check out this iFixit article: it covers the components of the Watchman V2, so it should give you a pretty good idea. It looks like prototyping will be extremely quick and easy: simply place and wire up IR sensors, then tell the device where those sensors are. There’s no need to develop our own boards unless we want to; schematics for the Watchman V3 are included with the training, so we can produce as many as our hearts desire. More solidified info on development hardware is coming over the next two days. Something else that speeds up prototyping is that the firmware for these boards is locked down, so as to preserve uniformity in performance and behaviour across all third-party devices. Synapse did say that, if necessary, devs can work with them and Valve to customize firmware and modify OpenVR to match. Alternatively, we can shuttle data through two separate streams if we want to independently customize input/output.
We are obviously going into significantly more detail during the course than whatever content I’m sharing here, however I’m only sharing what I’m comfortable with. Some content is rightfully reserved for those attending the course in person.
If you’ve got any feedback on this post or questions for me while I’m still at the training, post here in the comments or email me at firstname.lastname@example.org. As for those of you who have already reached out to me on Reddit or by email, expect a summarizing post after the training is over where I hopefully answer all of your questions. Will report in again tomorrow!