MasonSakai/VR-AI-Full-Body-Tracking


VR-AI-Full-Body-Tracking

An AI-driven, camera-based full-body tracking solution for VR systems. It is free apart from the cost of cameras, which are fairly cheap (you can use your phone, and naturally the computer's webcam).

Using OpenVR Input Emulator, the program creates virtual trackers and, with cameras placed around the room, mimics real Vive full-body trackers. Each camera has an associated browser page running a TensorFlow pose-detection model (an AI that estimates where your body is), and the more cameras at different angles, the better.

- Works with any camera the browser recognizes as a webcam. GoPros behave oddly, but there are workarounds.
- Features basic Playspace Mover functionality, configurable through the UI. It still needs refinement (and on some systems it only half works).
- Supports up to 12-point tracking (including the headset and hands, meaning 9 additional trackers); knee trackers are somewhat redundant unless the IK is not good enough for you (or there is no IK).
- Can cover room scale with a few well-placed cameras; wide-FOV cameras help.
- Works in VRChat, and has also been tested in Source Filmmaker with a VR plugin. It should work with any headset or system that uses SteamVR/OpenVR.
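Conceptually, each camera's browser page is a thin wrapper around a pose model: it grabs webcam frames, runs detection, and reports 2D keypoints back to the desktop program. As a rough illustration, the keypoint fields below follow the @tensorflow-models/pose-detection output shape (pixel `x`/`y` plus a confidence `score`), but the `packPose` helper and the message format are hypothetical, not the project's actual protocol:

```javascript
// Hypothetical helper: normalize a detected pose before sending it to the
// desktop program. The keypoint fields mirror @tensorflow-models/pose-detection;
// the camera id and message shape are illustrative assumptions.
function packPose(cameraId, pose, frameWidth, frameHeight) {
  return {
    camera: cameraId,
    keypoints: pose.keypoints.map(k => ({
      name: k.name,
      // Normalize to [0, 1] so the receiver is independent of each
      // camera's resolution.
      x: k.x / frameWidth,
      y: k.y / frameHeight,
      score: k.score,
    })),
  };
}

// Example with a single fake keypoint from a 640x480 frame:
const msg = packPose('cam0', {
  keypoints: [{ name: 'left_hip', x: 320, y: 240, score: 0.9 }],
}, 640, 480);
// msg.keypoints[0] is { name: 'left_hip', x: 0.5, y: 0.5, score: 0.9 }
```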

Notes

- This program practically requires at least two cameras, and it requires decent camera placement. Details below.
- The AI does not capture 3D position or orientation. That is all computed by the program and is prone to issues and inaccuracies.
- Side-to-side hip tracking is not great, as the AI can't capture this movement very well (sorry, no wiggles). Hip tracking is fine otherwise.
- This may be slow on some hardware, especially since the AI is entirely CPU-based and runs in a browser. (If someone can get it running on a better backend, network details are in Remote1CamProcessing's readme.)
- If the markers on a camera are weirdly or inconsistently offset from the person in frame (noticed with a GoPro, for example), routing the feed through OBS Virtual Camera with a solid background layer keeps the frame a consistent size. (I can't do much about this; it's part of how the AI initializes.)
- This has only been tested on Windows with Chrome, and because of Qt it may only work on Windows.
- At the time of writing, the program is only lightly tested and may be prone to issues. I currently don't have access to a headset, so I can't do much without feedback.
- Development was done on a Quest 2 over Air Link. I do not know the Vive controller layout or button masks, so I will need to be given them and will update this later.
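The second note above is the key architectural point: each camera contributes only a 2D detection, and the program fuses the views into a 3D position. As a toy sketch of why cameras at different angles matter, consider two idealized perpendicular cameras (this is purely illustrative geometry, not the project's actual solver, which must also deal with calibration, lens distortion, and noise):

```javascript
// Toy 3D reconstruction with two idealized cameras at right angles:
// camera A looks along -z (its image sees world x and y), camera B looks
// along -x (its horizontal axis sees world z, and it also sees y).
function fusePerpendicular(viewA, viewB) {
  return {
    x: viewA.x,                  // only A observes world x
    z: viewB.x,                  // B's horizontal axis is world z
    y: (viewA.y + viewB.y) / 2,  // both cameras see height; average them
  };
}

// A hip keypoint seen by both cameras:
const hip = fusePerpendicular({ x: 0.2, y: 1.0 }, { x: -0.4, y: 1.1 });
// hip is { x: 0.2, z: -0.4, y: 1.05 }
```

A single camera leaves depth ambiguous; a second camera at a different angle resolves it, which is why the README keeps stressing placement.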

Installation

- There is no real installer: just unzip the archive wherever you want (the program needs to be able to write to config.json).
- It requires OpenVR Input Emulator, and you will likely need this patch for the program to work:
  https://github.com/Louka3000/OpenVR-InputEmulator-Fixed/releases/tag/v0.1-pimax

- As mentioned, each active camera needs its own browser window.
- These pages can be reached at 127.0.0.1:portNumber on the local device; from another device, use the local IP of the computer running the VR program instead.
- The port number defaults to 2674 and can be changed in config.json.
- If the browser runs on another computer, you need to enable the "Insecure origins treated as secure" flag for the target URL (chrome://flags/ in Chrome; similar in other browsers).
- The page needs camera permissions, and once you select a camera you need to press Apply before starting (I don't recommend changing cameras while it's running).
- Adding more cameras is, at the moment, only done through config.json.
- Once a camera is running, it must be calibrated from VR. You will be given instructions both within VR and in the console.
- I would not recommend recentering after calibrating cameras, or calibrating a new camera after recentering while others are active. After recentering, there should be a system to recenter the cameras by recalibrating one of them, but if that doesn't work you must recalibrate them all. Adding a camera after recentering without recalibrating the others will cause issues until the others are recalibrated. tl;dr: just don't recenter.
- If the trackers don't appear in the middle of the SteamVR playspace to begin with, they must be calibrated in the Trackers tab. They will follow the position and orientation of your hands; once a tracker is in a reachable spot, press and hold your interact button, move your controller to the tracker's position, and release the button. This should calibrate it, though it may take some practice to figure out where you need to be relative to the controllers to center them well. It is easier to do this before calibrating any cameras.
- If the trackers are rotated and aren't really following your controllers, restart SteamVR; I can't do much about that.
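The press-hold-release gesture above boils down to capturing the controller's pose at the moment of release and using it as the tracker's calibrated position. A minimal sketch of that idea, assuming a hypothetical `makeCalibrator` helper (the real program also calibrates orientation and keeps the tracker following your hand while the button is held):

```javascript
// Sketch of a press-and-hold, move, release calibration gesture.
// While held, the tracker follows the controller; on release, the
// controller's current position is stored as the calibrated position.
function makeCalibrator() {
  let calibrated = null;
  let held = false;
  return {
    press() { held = true; },
    release(controllerPose) {
      if (held) {
        calibrated = { ...controllerPose };
        held = false;
      }
      return calibrated;
    },
    get position() { return calibrated; },
  };
}

const cal = makeCalibrator();
cal.press();
// The user moves the controller to the tracker's spot, then releases:
const pose = cal.release({ x: 0.1, y: 0.9, z: -0.2 });
// pose is { x: 0.1, y: 0.9, z: -0.2 }
```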

Camera Count and Placement