Virtual Reality requires several devices: a headset, a computer or smartphone (or another machine) to generate the digital environment, and, in some cases, a motion-tracking device. Typically, the headset displays content before the user’s eyes while a cable (usually HDMI) transfers the images from a PC to the screen. The alternative is a headset that works with a smartphone, such as Google Cardboard or Gear VR, where the phone acts as both the display and the source of VR content.
Some vendors use lenses to turn flat images into three-dimensional ones. VR devices typically achieve a field of view of about 100 to 110 degrees. The next key feature is the frame rate, which should be at least 60 frames per second for virtual simulations to look realistic enough.
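The 60 fps threshold translates directly into a per-frame time budget for the rendering engine. A minimal sketch of that arithmetic (the function names are illustrative, not from any particular SDK):

```python
# Sketch: the per-frame time budget implied by a target frame rate.
# At 60 fps, each frame must be rendered within roughly 16.7 ms;
# frames that take longer are dropped, and the simulation looks choppy.

def frame_budget_ms(target_fps: float) -> float:
    """Return the maximum time (in milliseconds) available per frame."""
    return 1000.0 / target_fps

def meets_target(render_time_ms: float, target_fps: float = 60.0) -> bool:
    """True if a frame rendered in `render_time_ms` sustains the target rate."""
    return render_time_ms <= frame_budget_ms(target_fps)

print(round(frame_budget_ms(60), 1))  # ~16.7 ms per frame at 60 fps
print(meets_target(12.0))             # True: 12 ms fits in the budget
print(meets_target(22.0))             # False: too slow for 60 fps
```

This is why VR headsets often target 90 fps or more: the budget shrinks to about 11 ms per frame, but the lower motion-to-photon delay reduces discomfort.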
For user interaction, there are several options:
The head-tracking system in a Virtual Reality headset follows the movements of your head from side to side and at different angles. It maps those movements onto the X, Y, and Z axes and relies on tools such as an accelerometer, a gyroscope, and a ring of LEDs around the headset that an external camera can follow. Head tracking requires low latency (50 milliseconds or less); otherwise, users will notice a lag between their head movements and the simulation.
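The core of head tracking can be sketched as a loop that integrates gyroscope readings into an orientation and checks the latency budget. The sensor rates and timing values below are hypothetical, and real trackers fuse the gyroscope with the accelerometer to correct drift:

```python
# Sketch of a head-tracking update (hypothetical values).
# A gyroscope reports angular velocity around the X, Y, and Z axes;
# integrating those rates over time yields the head's orientation
# (pitch, yaw, roll). The tracker also verifies that motion-to-display
# latency stays within the 50 ms budget mentioned above.

MAX_LATENCY_MS = 50.0  # above this, users notice lag between head and image

def integrate_orientation(orientation, angular_velocity, dt):
    """Update (pitch, yaw, roll) in degrees from gyro rates (deg/s) over dt seconds."""
    return tuple(angle + rate * dt
                 for angle, rate in zip(orientation, angular_velocity))

def latency_ok(sample_time_ms: float, display_time_ms: float) -> bool:
    """True if the rendered image reached the screen within the latency budget."""
    return (display_time_ms - sample_time_ms) <= MAX_LATENCY_MS

# Head starts level; gyro reports a 90 deg/s turn for 0.1 s.
pose = (0.0, 0.0, 0.0)  # (pitch, yaw, roll) in degrees
pose = integrate_orientation(pose, (0.0, 90.0, 0.0), 0.1)
print(tuple(round(a, 6) for a in pose))  # head has turned about 9 degrees in yaw
print(latency_ok(0.0, 42.0))             # True: 42 ms is within budget
```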
Some headsets include an infrared controller that tracks the direction of your eyes inside the virtual environment. The major benefit of this technology is a more realistic and deeper field of view.
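What an eye tracker ultimately delivers is a gaze direction, which the renderer projects onto the display to find the point the user is looking at. A minimal sketch of that projection, with illustrative screen geometry and angle conventions that are assumptions rather than any vendor's specification:

```python
# Sketch: projecting gaze angles onto a flat display to find the gaze point.
# The 5 cm eye-to-screen distance and the angle convention are illustrative.
import math

def gaze_to_screen(yaw_deg: float, pitch_deg: float,
                   screen_distance_m: float = 0.05):
    """Project gaze angles onto a display `screen_distance_m` away.

    Returns the (x, y) offset in metres from the screen centre.
    """
    x = screen_distance_m * math.tan(math.radians(yaw_deg))
    y = screen_distance_m * math.tan(math.radians(pitch_deg))
    return x, y

# Looking straight ahead lands on the screen centre.
print(gaze_to_screen(0.0, 0.0))  # (0.0, 0.0)
```

Knowing the gaze point is also what enables foveated rendering: the headset renders full detail only in the small region the eyes are actually fixated on.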
Though not yet engineered and implemented well enough, motion tracking would raise VR to a totally new level; without it, you are limited in VR, unable to look around and move around freely. Built on the concepts of 6DoF (six degrees of freedom) and 3D space, the options for motion tracking fall into two groups: optical and non-optical. Optical tracking typically uses a camera on the headset to follow movements, while non-optical tracking relies on other sensors on the device or the body. Most existing devices actually combine both options.
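The 6DoF idea and the combination of optical and non-optical tracking can be sketched together: a pose holds three translational and three rotational degrees of freedom, and a simple complementary filter (with made-up weights, not any shipping headset's algorithm) blends the camera's drift-free estimate with the fast-updating IMU estimate:

```python
# Sketch: a 6DoF pose and a toy complementary filter that blends an
# optical (camera) estimate with a non-optical (IMU) estimate.
# The blending weight `alpha` is an illustrative assumption.
from dataclasses import dataclass

@dataclass
class Pose6DoF:
    x: float      # translation: left/right
    y: float      # translation: up/down
    z: float      # translation: forward/back
    pitch: float  # rotation around X
    yaw: float    # rotation around Y
    roll: float   # rotation around Z

def fuse(optical: Pose6DoF, imu: Pose6DoF, alpha: float = 0.98) -> Pose6DoF:
    """Blend two pose estimates; alpha weights the fast-but-drifting IMU."""
    blend = lambda a, b: alpha * b + (1 - alpha) * a
    fields = ("x", "y", "z", "pitch", "yaw", "roll")
    return Pose6DoF(*(blend(getattr(optical, f), getattr(imu, f))
                      for f in fields))

# Camera says the head is still; the IMU has already sensed a 10-degree turn.
cam = Pose6DoF(0.0, 1.6, 0.0, 0.0, 0.0, 0.0)
imu = Pose6DoF(0.0, 1.6, 0.0, 0.0, 10.0, 0.0)
fused = fuse(cam, imu)
print(round(fused.yaw, 2))  # yaw comes mostly from the responsive IMU
```

In practice the camera correction runs at a lower rate than the IMU, which is exactly why devices combine both: the IMU supplies smooth high-frequency motion while the optical system cancels its accumulated drift.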