Tag archives: skeletal tracking

The New Generation Kinect for Windows Sensor is Coming Next Year

The all-new active-infrared capabilities allow the new sensor to work in nearly any lighting condition. This makes it possible for developers to build apps with enhanced recognition of facial features, hand position, and more.

By now, most of you likely have heard about the new Kinect sensor that Microsoft will deliver as part of Xbox One later this year.

Today, I am pleased to announce that Microsoft will also deliver a new generation Kinect for Windows sensor next year. We’re continuing our commitment to equipping businesses and organizations with the latest natural user interface technology from Microsoft so that they, in turn, can develop and deploy innovative touch-free applications for their businesses and customers. A new Kinect for Windows sensor and software development kit (SDK) are core to that commitment.

Both the new Kinect sensor and the new Kinect for Windows sensor are being built on a shared set of technologies. Just as the new Kinect sensor will bring opportunities for revolutionizing gaming and entertainment, the new Kinect for Windows sensor will revolutionize computing experiences. The precision and intuitive responsiveness that the new platform provides will accelerate the development of voice and gesture experiences on computers.

Some of the key capabilities of the new Kinect sensor include:

  • Higher fidelity
    The new sensor includes a high-definition (HD) color camera as well as a new noise-isolating multi-microphone array that filters ambient sounds to recognize natural speaking voices even in crowded rooms. Also included is Microsoft’s proprietary Time-of-Flight technology, which measures the time it takes individual photons to rebound off an object or person, yielding unprecedented accuracy and precision. All of this means that the new sensor recognizes precise motions and details, such as slight wrist rotation, body position, and even the wrinkles in your clothes. The Kinect for Windows community will benefit from the sensor’s enhanced fidelity, which will allow developers to create highly accurate solutions that see a person’s form better than ever, track objects and environments with greater detail, and understand voice commands in noisier settings than before.

The enhanced fidelity and depth perception of the new Kinect sensor will allow developers to create apps that see a person's form better, track objects with greater detail, and understand voice commands in noisier settings.
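For context, the Time-of-Flight measurement mentioned above reduces to a single relation: if light's round trip from the emitter to the subject and back takes time \( \Delta t \), the per-pixel depth is

\[ d = \frac{c \, \Delta t}{2}, \]

where \( c \) is the speed of light. At room scale, \( \Delta t \) is on the order of tens of nanoseconds (about 20 ns at 3 meters), which is why timing individual photons yields such fine depth precision.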

  • Expanded field of view
    The expanded field of view accommodates a multitude of differently sized rooms, minimizing the need to modify existing room configurations and opening up new solution-development opportunities. The combination of the new sensor’s higher fidelity plus expanded field of view will give businesses the tools they need to create truly untethered, natural computing experiences such as clicker-free presentation scenarios, more dynamic simulation and training solutions, up-close interactions, more fluid gesture recognition for quick interactions on the go, and much more.
        
  • Improved skeletal tracking
    The new sensor tracks more points on the human body than previously, including the tip of the hand and thumb, and tracks six skeletons at once. This not only yields more accurate skeletal tracking but also opens up a range of new scenarios, including improved “avateering,” the ability to develop enhanced rehabilitation and physical fitness solutions, and the possibility to create new experiences in public spaces—such as retail—where multiple users can participate simultaneously.

The new sensor tracks more points on the human body than previously and tracks six skeletons at once, opening a range of new scenarios, from improved "avateering" to experiences in which multiple users can participate simultaneously.

  • New active infrared (IR)
    The all-new active-IR capabilities allow the new sensor to work in nearly any lighting condition and, in essence, give businesses access to a fourth data stream: audio, depth, color…and now active IR. This will offer developers better built-in recognition capabilities in different real-world settings—independent of the lighting conditions—including the sensor’s ability to recognize facial features, hand position, and more.

I’m sure many of you want to know more. Stay tuned; at BUILD 2013 in June, we’ll share details about how developers and designers can begin to prepare to adopt these new technologies so that their apps and experiences are ready for general availability next year.

A new Kinect for Windows era is coming: an era of unprecedented responsiveness and precision.

Bob Heddle
Director, Kinect for Windows

Photos in this blog by STEPHEN BRASHEAR/Invision for Microsoft/AP Images

 

Inside the Newest Kinect for Windows SDK – Infrared Control

The Kinect for Windows software development kit (SDK) October release was a pivotal update with a number of key improvements. One important update in this release is how control of infrared (IR) sensing capabilities has been enhanced to create a world of new possibilities for developers.

IR sensing is a core feature of the Kinect sensor, but until this newest release, developers were somewhat restrained in how they could use it. The front of the Kinect for Windows sensor has three openings, each housing a core piece of technology. On the left is an IR emitter, which transmits a factory-calibrated pattern of dots across the room in which the sensor resides. The middle opening is a color camera. The third is the IR camera, which reads the dot pattern and helps the Kinect for Windows system software sense objects and people, along with their skeletal tracking data.
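As a minimal sketch of what this unlocks in application code—assuming the SDK 1.6 managed API (Microsoft.Kinect), where IR frames are surfaced through the color stream as 16-bit grayscale—reading the IR stream and toggling the emitter looks roughly like this:

```csharp
// Hedged sketch, not official sample code: reading the infrared stream and
// toggling the IR emitter with the Kinect for Windows SDK 1.6 managed API.
using System;
using System.Linq;
using Microsoft.Kinect;

class InfraredSketch
{
    static void Main()
    {
        KinectSensor sensor = KinectSensor.KinectSensors
            .FirstOrDefault(s => s.Status == KinectStatus.Connected);
        if (sensor == null) return;

        // IR frames arrive through the color stream as 16-bit grayscale.
        sensor.ColorStream.Enable(ColorImageFormat.InfraredResolution640x480Fps30);
        sensor.ColorFrameReady += (s, e) =>
        {
            using (ColorImageFrame frame = e.OpenColorImageFrame())
            {
                if (frame == null) return;
                byte[] irPixels = new byte[frame.PixelDataLength];
                frame.CopyPixelDataTo(irPixels);
                // ...render irPixels as a Gray16 bitmap, or feed a
                // camera-calibration or low-light pipeline here...
            }
        };
        sensor.Start();

        // Supported on Kinect for Windows hardware: blank the emitter, for
        // example so overlapping sensors don't read each other's dot patterns.
        sensor.ForceInfraredEmitterOff = true;
        sensor.ForceInfraredEmitterOff = false;

        Console.ReadLine();
        sensor.Stop();
    }
}
```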


Kinect for Windows releases SDK update and launches in China

I’m very pleased to announce that the latest Kinect for Windows runtime and software development kit (SDK) have been released today. I am also thrilled to announce that the Kinect for Windows sensor is now available in China.

Developers and business leaders around the world are just beginning to realize what’s possible when the natural user interface capabilities of Kinect are made available for commercial use in Windows environments. I look forward to seeing the innovative things Chinese companies do with this voice and gesture technology, as well as the business and societal problems they are able to solve with it.

Kinect for Windows availability: current and coming soon

 

The updated SDK gives developers more powerful sensor data tools and better ease of use, while offering businesses the ability to deploy in more places. The updated SDK includes:

Extended sensor data access

  • Data from the sensor’s 3-axis accelerometer is now exposed in the API. This enables detection of the sensor’s orientation.
  • Extended-range depth data now provides details beyond 4 meters. This data lies beyond the tested and certified ranges and is therefore of lower accuracy, but for those developers who want access to it, it’s now available.
  • Color camera settings, such as brightness and exposure, can now be set explicitly by the application, allowing developers to tune a Kinect for Windows sensor’s environment.
  • The infrared stream is now exposed in the API. This means developers can use the infrared stream in many scenarios, such as calibrating other color cameras to the depth sensor or capturing grayscale images in low-light situations.
  • The updated SDK, used with Kinect for Windows sensors, allows faster toggling of the IR emitter to support multiple overlapping sensors.

Access to all this data means new experiences are possible: Whole new scenarios open up, such as monitoring manufacturing processes with extended-range depth data. Building solutions that work in low-light settings becomes a reality with IR stream exposure, such as in theaters and light-controlled museums. And developers can tailor applications to work in different environments with the numerous color camera settings, which enhance an application’s ability to work perfectly for end users.
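To make the new data access concrete, here is a hedged sketch—again assuming the SDK 1.6 managed API—of reading the 3-axis accelerometer and taking explicit control of the color camera; exact property ranges are device-dependent:

```csharp
// Hedged sketch, not official sample code: the 3-axis accelerometer and
// explicit color camera settings introduced in the Kinect for Windows SDK 1.6.
using System;
using System.Linq;
using Microsoft.Kinect;

class SensorDataSketch
{
    static void Main()
    {
        KinectSensor sensor = KinectSensor.KinectSensors
            .FirstOrDefault(s => s.Status == KinectStatus.Connected);
        if (sensor == null) return;

        sensor.ColorStream.Enable();
        sensor.Start();

        // The accelerometer exposes a gravity vector, which reveals the
        // sensor's orientation (e.g. tilted or mounted upside down).
        Vector4 gravity = sensor.AccelerometerGetCurrentReading();
        Console.WriteLine("Gravity: {0:F2}, {1:F2}, {2:F2}",
            gravity.X, gravity.Y, gravity.Z);

        // Color camera settings can now be set explicitly, so an app can be
        // tuned for a known environment instead of relying on auto-exposure.
        ColorCameraSettings cam = sensor.ColorStream.CameraSettings;
        cam.AutoExposure = false;  // switch to manual control first
        cam.Brightness = 0.5;      // assumption: a value within the device's range

        sensor.Stop();
    }
}
```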

One of the new samples released, the Basic Interactions – WPF sample, demonstrates a best-in-class UI based on the Kinect for Windows Human Interface Guidelines.

Improved developer tools

  • Kinect Studio has been updated to support all new sensor data features.
  • The SDK ships with a German speech recognition language pack that has been optimized for the sensor’s microphone array.
  • Skeletal tracking is now supported on multiple sensors within a single application.
  • New samples show how to use all the new SDK features. Additionally, a fantastic new sample, Basic Interactions – WPF, has been released; it demonstrates a best-in-class UI based on the Kinect for Windows Human Interface Guidelines.

We are committed to making it easier and easier for developers to create amazing applications. That’s why we continue to invest in tools and resources like these. We want to do the heavy lifting behind the scenes so the technologists using our platform can focus on making their specific solutions great. For instance, people have been using our Human Interface Guidelines (HIG) to design more natural, intuitive interactions since we released them last May. Now, the Basic Interactions sample brings to life the best practices described in the HIG and can easily be repurposed.

Greater support for operating systems

  • Windows 8 compatibility. By using the updated Kinect for Windows SDK, you can develop a Kinect for Windows solution for Windows 8 desktop applications.
  • The latest SDK supports development with Visual Studio 2012 and the new Microsoft .NET Framework 4.5.
  • The Kinect for Windows sensor now works on Windows running in a virtual machine (VM) and has been tested with the following VM environments: Microsoft Hyper-V, VMware, and Parallels.

Windows 8 compatibility and VM support now mean Kinect for Windows can be in more places, on more devices. We want our business customers to be able to build and deploy their solutions where they want, using the latest tools, operating systems, and programming languages available today.

This updated version of the SDK is fully compatible with previous commercial versions, so we recommend that all developers upgrade their applications to get access to the latest improvements and to ensure that Windows 8 deployments have a fully tested and supported experience.

As I mentioned in my previous blog post, over the next few months we will be making Kinect for Windows sensors available in seven more markets: Chile, Colombia, the Czech Republic, Greece, Hungary, Poland, and Puerto Rico. Stay tuned; we’ll bring you more updates on interesting applications and deployments in these and other markets as we learn about them in coming months.

Craig Eisler
General Manager, Kinect for Windows


Nissan Pathfinder Virtual Showroom is Latest Auto Industry Tool Powered by Kinect for Windows


Automotive companies Audi, Ford, and Nissan are adopting Kinect for Windows as the newest way to put a potential driver into a vehicle. Most car buyers want to get “hands on” with a car before they are ready to buy, so automobile manufacturers have invested in tools such as online car configurators and 360-degree image viewers that make it easier for customers to visualize the vehicle they want.

Now, Kinect's unique combination of camera, body tracking capability, and audio input can put the car buyer into the driver's seat in more immersive ways than have been previously possible—even before the vehicle is available on the retail lot!

The most recent example of this automotive trend is the 2013 Nissan Pathfinder application powered by Kinect for Windows, which was originally developed to demonstrate the new Pathfinder at auto shows before there was a physical car available.

Nissan quickly recognized the value of this application for building buzz at local dealerships, piloting it in 16 dealerships in 13 states nationwide.

“The Pathfinder application using Kinect for Windows is a game changer in terms of the way we can engage with consumers,” said John Brancheau, vice president of marketing at Nissan North America. “We’re taking our marketing to the next level, creating experiences that enhance the act of discovery and generate excitement about new models before they’re even available. It’s a powerful pre-sales tool that has the potential to revolutionize the dealer experience.”

Digital marketing agency Critical Mass teamed with interactive experience developer IdentityMine to design and build the Kinect-enabled Pathfinder application for Nissan. “We’re pioneering experiences like this one for two reasons: the ability to respond to natural human gestures and voice input creates a rich experience that has broad consumer appeal,” notes Critical Mass President Chris Gokiert. “Additionally, the commercial relevance of an application like this can fulfill a critical role in fueling leads and actually helping to drive sales on site.”

Each dealer has a kiosk that includes a Kinect for Windows sensor, a monitor, and a computer running the Pathfinder application built with the Kinect for Windows SDK. Since the Nissan Pathfinder application first debuted at the Chicago Auto Show in February 2012, developers have made several enhancements, including a new pop-up tutorial and interface improvements, such as larger interaction icons and instructional text along the bottom of the screen, so that a customer with no Kinect experience can jump right in. “In the original design for the auto show, the application was controlled by a trained spokesperson. That meant aspects like discoverability and ease of use for first-time users were things we didn’t need to design for,” noted IdentityMine Research Director Evan Lang.

Now, shoppers who approach the Kinect-based showroom are guided through an array of natural movements—such as extending their hands, stepping forward and back, and leaning from side to side—to activate hotspots on the Pathfinder model, allowing them to inspect the car inside and out.

The project was not, however, without a few challenges. The detailed Computer-Aided Design (CAD) model data provided by Nissan, while ideal for commercials and other post-rendered uses, did not lend itself easily to a real-time engine. “A lot of rework was necessary that involved ‘retopologizing’ the mesh,” reported IdentityMine’s 3D Design Lead Howard Schargel. “We used the original as a template and traced over to get a cleaner, more manageable polygon count. We were able to remove much more than half of the original polygons, allowing for more fluid interactions and animations while still retaining the fidelity of the client’s original model.”

And then, the development team pushed further. “The application uses a dedicated texture to provide a dynamic, scalable level of detail to the mesh by adding or removing polygons, depending on how close it is to the camera,” explained Schargel. “It may sound like mumbo jumbo—but when you see it, you won’t believe it.”
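What Schargel describes is a dynamic level-of-detail (LOD) scheme. The texture-driven implementation is IdentityMine’s own and isn’t public, but the underlying idea can be sketched generically; the helper and thresholds below are hypothetical, purely for illustration:

```csharp
// Hypothetical helper, for illustration only: pick a mesh LOD index from the
// camera-to-model distance. Lower index = denser mesh, shown up close.
static class MeshLod
{
    // Distance cutoffs in meters; in practice these are tuned per scene.
    static readonly double[] Thresholds = { 2.0, 5.0, 10.0 };

    public static int ChooseLod(double cameraDistanceMeters)
    {
        for (int i = 0; i < Thresholds.Length; i++)
        {
            if (cameraDistanceMeters < Thresholds[i])
            {
                return i;  // close enough for this detail level
            }
        }
        return Thresholds.Length;  // coarsest mesh beyond the last cutoff
    }
}
```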

You can see the Nissan Pathfinder app in action at one of the 16 participating dealerships or by watching our video case study.

Kinect for Windows Team


KinÊtre Uses Kinect for Windows to Animate Objects in Real Time

Traditional digital animation techniques can be costly and time-consuming. But KinÊtre—a new Kinect for Windows project developed by a team at Microsoft Research Cambridge—makes the process quick and simple enough that anyone can be an animator, bringing inanimate objects to life.

KinÊtre uses the skeletal tracking technology in the Kinect for Windows software development kit (SDK) for input, scanning an object as the Kinect sensor is slowly panned around it. The KinÊtre team then applied its expertise in cutting-edge 3-D image-processing algorithms to turn the scanned object into a flexible mesh that is manipulated to match user movements tracked by the Kinect sensor.
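For developers curious what that skeletal input layer looks like, here is a hedged sketch using the SDK 1.x managed API; the mesh-deformation step is KinÊtre’s research contribution and is not shown:

```csharp
// Hedged sketch of the skeletal-tracking input that drives a system like
// KinÊtre: per-frame joint positions that could be mapped onto a mesh.
using System;
using System.Linq;
using Microsoft.Kinect;

class SkeletonInputSketch
{
    static void Main()
    {
        KinectSensor sensor = KinectSensor.KinectSensors
            .FirstOrDefault(s => s.Status == KinectStatus.Connected);
        if (sensor == null) return;

        sensor.SkeletonStream.Enable();
        sensor.SkeletonFrameReady += (s, e) =>
        {
            using (SkeletonFrame frame = e.OpenSkeletonFrame())
            {
                if (frame == null) return;
                Skeleton[] skeletons = new Skeleton[frame.SkeletonArrayLength];
                frame.CopySkeletonDataTo(skeletons);

                foreach (Skeleton sk in skeletons.Where(
                    k => k.TrackingState == SkeletonTrackingState.Tracked))
                {
                    // Each tracked joint carries a 3-D position in meters;
                    // these are the control points a mesh could be rigged to.
                    SkeletonPoint hand = sk.Joints[JointType.HandRight].Position;
                    Console.WriteLine("Right hand: {0:F2}, {1:F2}, {2:F2}",
                        hand.X, hand.Y, hand.Z);
                }
            }
        };
        sensor.Start();
        Console.ReadLine();
        sensor.Stop();
    }
}
```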

Microsoft has made deep investments in Kinect hardware and software. This enables innovative projects like KinÊtre, which is being presented this week at SIGGRAPH 2012, the International Conference and Exhibition on Computer Graphics and Interactive Techniques. Rather than targeting professional computer graphics (CG) animators, KinÊtre is intended to bring mesh animation to a new audience of novice users.

Shahram Izadi, one of the tool’s creators at Microsoft Research Cambridge, told me that the goal of this research project is to make this type of animation far more accessible than it has been; historically, building these types of effects required a studio full of trained CG animators. “KinÊtre makes creating animations a more playful activity,” he said. “With it, we demonstrate potential uses of our system for interactive storytelling and new forms of physical gaming.”

This incredibly cool prototype reinforces the world of possibilities that Kinect for Windows can bring to life—and even, perhaps, do a little dance.

Peter Zatloukal
Kinect for Windows Engineering Manager
