Using Ruby with Xbox Kinect for Fun and Profit

Summarized using AI


Nate Peel • February 17, 2015

In this video titled "Using Ruby with Xbox Kinect for Fun and Profit," Nate Peel discusses the integration of Ruby programming with the Xbox Kinect device. The session, presented at MountainWest RubyConf 2011, showcases the potential of using Kinect technology for interactive applications and game development. Nate begins by introducing the Kinect, noting its massive commercial success as a motion-sensing device that allows users to control games using body movements, eliminating the need for traditional controllers. He explains the technical components of the Kinect, including its infrared and RGB cameras that work in conjunction to capture depth and visuals.

Nate highlights the importance of the driver developed for Kinect that enables its interaction with computer applications. He references tools such as 'GLview' for visualizing depth data and settings within the Kinect. Moving deeper into programming, he explains his creation of a Ruby wrapper for the Kinect, which simplifies the retrieval of depth and image data in Ruby applications.

Key points include:

- Overview of Kinect Technology: The Kinect comprises dual cameras and an infrared laser projector for depth sensing.

- Driver Development: The OpenNI driver from PrimeSense enables "natural interaction" features, such as skeleton tracking, in computer applications.

- Ruby Library Creation: Nate developed a Ruby library that captures skeletal data and gestures, making it easier for developers to integrate Kinect capabilities.

- Real-time Visualization: He demonstrates how skeletal data can be visualized in real time through a Ruby Processing sketch, showcasing user movements captured by the Kinect.

- Challenges with Ambient Light: Nate discusses the effects of environmental conditions on motion capture accuracy.

- Code Breakdown: Throughout the presentation, he explains the structure and functionality of the code used to process skeleton joint data.

- Collaboration Encouragement: He invites developers familiar with C++ or Ruby to join in enhancing the library, emphasizing communal growth in software development.

In conclusion, this session illustrates how Ruby can effectively leverage Kinect's motion detection and visualization capabilities, offering exciting opportunities for developers interested in interactive projects and real-time data processing.


By Nate Peel
The Xbox Kinect is an exciting new way to interact with not only your Xbox 360 but also your computer. In this segment, Nate will show you how to use Ruby to interact with a Kinect and gather skeleton sequences for an animation that can be imported into a 3D program for use in a game or movie.


MountainWest RubyConf 2011

00:00:14.530 Hello, my name is Nate Peel, and I'm here to talk about using Ruby with the Xbox Kinect. I previously worked at an agency doing iPhone development and Ruby on Rails.
00:00:23.420 One of the cool projects I've been involved in is a karaoke game that integrates with the Kinect. Currently, I work at Wasatch Front, a listing service that helps people buy and sell homes online.
00:00:40.489 For those of you who don’t know, the Kinect is a motion-sensing device that was released for the Xbox 360 in November 2010, and it quickly became the fastest-selling consumer electronics device ever, selling eight million units in just four months. Interestingly, Microsoft didn't develop the core technology itself; it licensed it from a company called PrimeSense.
00:01:06.409 The Kinect allows users to control video games using body movements—no controller needed. You can wave your arms, jump, and move around, and the Kinect can also capture images of you while you're playing, which is really interesting.
00:01:28.060 So, how does it work? The Kinect has two cameras on the front: one infrared camera and one RGB camera. It also includes an infrared laser projector that projects a grid of micro dots that help the device sense depth. Both cameras have a resolution of 640x480 and can capture at 30 frames per second.
00:02:31.630 You can visualize the depth data; the patterns projected on the ground help determine the distance between the camera and any objects. In addition to this technology, a driver was developed to allow interaction with the Kinect on your computer. This driver enables you to gather data from the Kinect.
00:03:19.580 This driver is available under the Apache license, and you can download it to use with your Kinect. If you are using a Mac, you can also utilize tools available for that platform. One such tool is a utility called 'GLview,' which gives you a live view from the Kinect's RGB camera and infrared sensors.
00:03:41.540 With GLview, you can see how depth data is visualized. You can control the Kinect through this program, adjusting settings like the LED light and camera mode. As I demonstrate here, the patterns projected by the Kinect can be observed on different surfaces.
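(An editorial aside, not from the talk: the raw depth values the driver hands back are 11-bit sensor readings, not physical distances. A rough conversion formula circulated by the OpenKinect community gives a feel for what the numbers mean; here is a minimal Ruby sketch using it.)

    # Approximate conversion of a raw 11-bit Kinect depth reading to meters,
    # using a formula popularized by the OpenKinect community.
    def raw_depth_to_meters(raw)
      # Raw values run from 0 to 2047; 2047 usually means "no reading".
      return nil if raw >= 2047
      0.1236 * Math.tan(raw / 2842.5 + 1.1863)
    end

    [500, 750, 1000].each do |raw|
      puts format("raw %4d -> %.2f m", raw, raw_depth_to_meters(raw))
    end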
00:05:02.370 Next, I decided to create a Ruby wrapper for Kinect to facilitate its usage within Ruby applications. This wrapper can retrieve depth information as well as image data from the Kinect.
00:05:23.330 In the code I’ll show you, we initialize the Kinect and capture the depth and image data. This example is a bit more complex as it uses OpenGL to render the depth data in real time.
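(The exact API of Nate's wrapper isn't reproduced in this transcript, so the following is a hypothetical sketch of what initializing a Kinect and pulling a depth frame might look like from Ruby. The Kinect class here is an invented stand-in that fabricates a frame, so the example runs without hardware.)

    class Kinect
      WIDTH  = 640
      HEIGHT = 480

      def depth_frame
        # A real wrapper would call into the C driver here; this stand-in
        # fabricates a frame of raw 11-bit readings for illustration.
        Array.new(WIDTH * HEIGHT) { rand(0..2047) }
      end
    end

    kinect = Kinect.new
    frame  = kinect.depth_frame
    center = frame[(Kinect::HEIGHT / 2) * Kinect::WIDTH + Kinect::WIDTH / 2]
    puts "raw depth at the image center: #{center}"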
00:06:01.180 There is another driver called OpenNI (short for "Open Natural Interaction"), released by PrimeSense, the same company that created the technology behind the Kinect. This driver exposes higher-level "natural interaction" data, such as skeleton tracking, without needing additional devices.
00:07:03.090 To utilize this effectively in Ruby, I created a library that makes it easier to capture skeletal data and gestures from the Kinect. This includes capturing swipe gestures and other interactions.
00:07:29.729 The library I developed streamlines the process of working with the Kinect's skeletal data, making it an exciting and futuristic tool for developers.
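(A hedged illustration of the callback style such a library might expose; the SkeletonTracker class and its event names are invented for this sketch, not the talk's actual code.)

    class SkeletonTracker
      def initialize
        @handlers = Hash.new { |hash, event| hash[event] = [] }
      end

      # Register a callback for a named event (e.g. :swipe_left).
      def on(event, &block)
        @handlers[event] << block
      end

      # A real tracker would fire these from incoming Kinect data.
      def emit(event, *args)
        @handlers[event].each { |handler| handler.call(*args) }
      end
    end

    tracker = SkeletonTracker.new
    tracker.on(:swipe_left)  { puts "swipe left detected" }
    tracker.on(:joint_moved) { |name, x, y, z| puts "#{name} -> (#{x}, #{y}, #{z})" }

    # Simulate the events a Kinect-backed tracker might emit:
    tracker.emit(:swipe_left)
    tracker.emit(:joint_moved, :right_hand, 0.42, 1.10, 2.35)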
00:08:14.700 I also created a Ruby processing sketch that takes skeletal data from the Kinect and renders it on screen. This allows for real-time visualization of the skeleton data captured by the Kinect.
00:09:40.330 As we proceed, I'll run an example that shows this data being rendered in real time, allowing us to see how the Kinect captures the user's movements.
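(A stripped-down sketch in that spirit, assuming the ruby-processing gem's class-style Processing::App API; the joint positions below are hard-coded stand-ins for the live Kinect stream.)

    # Run with the gem's runner, e.g.: rp5 run skeleton_sketch.rb
    require "ruby-processing"

    class SkeletonSketch < Processing::App
      # Hard-coded joints standing in for live skeleton data.
      JOINTS = { head: [320, 80], left_hand: [200, 240], right_hand: [440, 240] }

      def setup
        size 640, 480
      end

      def draw
        background 0
        fill 255
        JOINTS.each_value { |(x, y)| ellipse(x, y, 20, 20) }
      end
    end
    # Depending on the gem version, the rp5 runner starts the sketch itself;
    # if not, add SkeletonSketch.new at the end of the file.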
00:10:43.030 We've encountered some challenges with ambient light affecting the Kinect's ability to accurately capture movement, since the depth sensing relies on an infrared pattern that strong ambient light can wash out; in my home, for example, there was quite a bit of interference.
00:12:23.790 The tool in this setup essentially broadcasts the captured skeleton data as network packets, which Ruby scripts can then consume to achieve real-time visualization.
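(To make that broadcast/consume split concrete: the sketch below mimics it with plain UDP from Ruby's standard library and a simple "name x y z" text format. The actual wire format isn't specified in this transcript, so the format here is an assumption for illustration, not the real protocol.)

    require "socket"

    PORT = 7110 # arbitrary port chosen for this example

    receiver = UDPSocket.new
    receiver.bind("127.0.0.1", PORT)

    # Consumer side: read joint packets and print them.
    reader = Thread.new do
      3.times do
        data, _addr = receiver.recvfrom(1024)
        name, x, y, z = data.split
        puts "joint #{name}: (#{x}, #{y}, #{z})"
      end
    end

    # Broadcast side: send a few fake joint positions.
    sender = UDPSocket.new
    [["head", 0.0, 1.6, 2.0],
     ["r_hand", 0.4, 1.1, 2.1],
     ["l_hand", -0.4, 1.1, 2.1]].each do |joint|
      sender.send(joint.join(" "), 0, "127.0.0.1", PORT)
    end

    reader.join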
00:12:51.060 I’ll take a moment to review the code that processes this skeletal data, explaining how each section works to handle incoming joint data.
00:14:44.290 In this code, I demonstrate how the joints are represented in a class structure and how we can listen for updates to joint positions in real time.
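(A hedged sketch of that kind of structure: a Joint class that stores its latest position and notifies registered listeners when it moves. The names are illustrative, not the talk's actual code.)

    class Joint
      attr_reader :name, :x, :y, :z

      def initialize(name)
        @name = name
        @listeners = []
        @x = @y = @z = 0.0
      end

      # Register a block to run whenever this joint's position changes.
      def on_update(&block)
        @listeners << block
      end

      # Called by the data pipeline when a new position arrives.
      def update(x, y, z)
        @x, @y, @z = x, y, z
        @listeners.each { |listener| listener.call(self) }
      end
    end

    head = Joint.new(:head)
    head.on_update { |j| puts "#{j.name} moved to (#{j.x}, #{j.y}, #{j.z})" }
    head.update(0.1, 1.58, 2.02)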
00:15:57.180 The culmination of this work has led to a Ruby library that enhances the ways developers can interact with the Kinect for capturing motion data.
00:17:38.880 I encourage anyone with experience in C++ or those interested in contributing to the library to reach out, as collaboration can help improve this project further.
00:19:13.730 Thank you all for your attention. Does anyone have any questions?