By raising the right hand above the head, a random image (from an array) is generated. The user can then move the image wherever they want, resizing it using the left hand's x and y position. When the user wants to place the image, they move the right hand forward to lock in its location and size. The image's "layer" depends on the z position of the left hand. Coded in Processing using the SimpleOpenNI library.
Raising the right hand above the head spawns a new image, which then follows the x and y values of that hand. The left hand's x and y values resize it. The layer is chosen by moving the left hand forward; it depends on how far that hand travels from the head. To place the image, the right hand is pushed forward in space, also measured relative to the position of the head.
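Roughly, the gesture checks look like the sketch below. This is a minimal illustration assuming a SimpleOpenNI 1.96-style API (enableUser(), getUsers() returning an int[]); the joint constants are the library's real ones, but the push threshold and the spawn/placement logic are placeholders rather than the project's exact values.

```java
// Minimal sketch of the gesture checks, assuming a SimpleOpenNI 1.96-style API.
// The pushThreshold value and the spawn/place stubs are illustrative only.
import SimpleOpenNI.*;

SimpleOpenNI context;
boolean imagePickedUp = false;

void setup() {
  size(640, 480);
  context = new SimpleOpenNI(this);
  context.enableDepth();
  context.enableUser();   // turns on skeleton tracking
}

void draw() {
  context.update();
  image(context.depthImage(), 0, 0);

  int[] users = context.getUsers();
  for (int i = 0; i < users.length; i++) {
    int userId = users[i];
    if (!context.isTrackingSkeleton(userId)) continue;

    PVector head = new PVector();
    PVector rightHand = new PVector();
    PVector leftHand = new PVector();
    context.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_HEAD, head);
    context.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_RIGHT_HAND, rightHand);
    context.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_LEFT_HAND, leftHand);

    // Right hand above the head (real-world y grows upward): spawn a new image.
    if (!imagePickedUp && rightHand.y > head.y) {
      imagePickedUp = true;
      // pick a random image here and start following the right hand
    }

    // Right hand pushed forward past the head by ~300 mm: place the image.
    float pushThreshold = 300;
    if (imagePickedUp && (head.z - rightHand.z) > pushThreshold) {
      imagePickedUp = false;
      // lock in position, size, and layer here
    }
  }
}

// SimpleOpenNI callback: start tracking any user who enters the scene.
void onNewUser(SimpleOpenNI curContext, int userId) {
  curContext.startTrackingSkeleton(userId);
}
```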
The code to select a random image had to run only when the boolean controlling whether an image was picked up was set to false; otherwise it would cycle through every image instead of displaying just one. There was also a weird hiccup when displaying the images: the sketch kept returning a null pointer because the ArrayList started with no items but the draw loop still tried to display from it. Even restricting it to run only when at least one item was present wasn't working; adding a check so the for statement couldn't run if the ArrayList was null fixed the issue. Implementing the layers was one of the biggest hurdles in this project. At first I was creating separate ArrayLists for them, and that wasn't working well. Once I switched to placing the images at different positions inside one ArrayList, the project started working better.
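In code, those two fixes amount to something like the sketch below. The PlacedImage class and the variable names are hypothetical stand-ins for whatever the project actually uses; the point is the null/empty guard before the for loop and the single ArrayList whose insertion index doubles as the layer.

```java
// Illustrative sketch of the two fixes: guard the display loop against a null list,
// and use one ArrayList where the index acts as the draw layer. Names are hypothetical.
ArrayList<PlacedImage> placedImages;   // may still be null before the first image is spawned

void drawPlacedImages() {
  // Without this null check the draw loop threw a NullPointerException
  // on the first frames, before any image had been added.
  if (placedImages == null || placedImages.isEmpty()) return;

  // Earlier items draw first, so a lower index means a deeper layer.
  for (int i = 0; i < placedImages.size(); i++) {
    PlacedImage p = placedImages.get(i);
    image(p.img, p.x, p.y, p.w, p.h);
  }
}

// Instead of separate lists per layer, insert into one list at the layer index.
void placeImage(PlacedImage p, int layer) {
  if (placedImages == null) placedImages = new ArrayList<PlacedImage>();
  int index = constrain(layer, 0, placedImages.size());
  placedImages.add(index, p);
}

class PlacedImage {
  PImage img;
  float x, y, w, h;
}
```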
The basics of coding for an Xbox 360 Kinect sensor, the limitations of the technology, and how to code for events to happen based on skeleton tracking from the Kinect.
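One way those limitations show up when coding events off skeleton tracking is joint confidence: SimpleOpenNI's getJointPositionSkeleton() returns a confidence value along with the joint position, and gating gestures on it keeps occluded or noisy joints from triggering spawns or placements. A small, assumed helper (the 0.5 cutoff is illustrative, not a value from the project):

```java
// Illustrative helper: only act on a joint when the tracker reports reasonable confidence.
// Would live inside the main sketch, alongside the SimpleOpenNI context.
boolean jointIsReliable(SimpleOpenNI context, int userId, int jointId, PVector out) {
  float confidence = context.getJointPositionSkeleton(userId, jointId, out);
  return confidence > 0.5;  // assumed threshold; tune for the room and lighting
}

// Usage inside draw(): skip the gesture checks when the right hand is occluded or noisy.
// PVector rightHand = new PVector();
// if (jointIsReliable(context, userId, SimpleOpenNI.SKEL_RIGHT_HAND, rightHand)) {
//   ... run the spawn / place checks ...
// }
```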