Keyword archive: Kinect SDK

OpenNI 2.0 now supports the Microsoft Kinect drivers

With OpenNI 2.0, it is now possible to use OpenNI and NITE with the official Microsoft Kinect drivers.
No more juggling between the Microsoft drivers and the PrimeSense ones.

More information: http://channel9.msdn.com/coding4fun/kinect/Opening-the-world-of-OpenNI-and-the-Kinect-SDK

Inside the Newest Kinect for Windows SDK – Infrared Control

The Kinect for Windows software development kit (SDK) October release was a pivotal update with a number of key improvements. One important update in this release is how control of infrared (IR) sensing capabilities has been enhanced to create a world of new possibilities for developers.

IR sensing is a core feature of the Kinect sensor, but until this newest release, developers were somewhat constrained in how they could use it. The front of the Kinect for Windows sensor has three openings, each housing a core piece of technology. On the left is the IR emitter, which projects a factory-calibrated pattern of dots across the room in which the sensor resides. The middle opening is a color camera. The third is the IR camera, which reads the dot pattern and helps the Kinect for Windows system software sense objects and people and derive their skeletal tracking data.
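To make this concrete: in the October release, the IR stream is exposed through the color stream APIs. Below is a minimal C# sketch, assuming the SDK 1.6 managed API (the `ColorImageFormat.InfraredResolution640x480Fps30` format) and a single connected sensor; each IR pixel is a 16-bit intensity value, so the usual buffer-copy pattern for color frames applies.

```csharp
using System;
using Microsoft.Kinect;

class InfraredSample
{
    static void Main()
    {
        // Grab the first connected sensor (assumes one is plugged in).
        KinectSensor sensor = KinectSensor.KinectSensors[0];

        // IR is exposed as a color image format: 640x480 at 30 fps, 16 bpp.
        sensor.ColorStream.Enable(ColorImageFormat.InfraredResolution640x480Fps30);

        sensor.ColorFrameReady += (s, e) =>
        {
            using (ColorImageFrame frame = e.OpenColorImageFrame())
            {
                if (frame == null) return;
                byte[] pixels = new byte[frame.PixelDataLength];
                frame.CopyPixelDataTo(pixels);
                // Each IR pixel is a little-endian 16-bit intensity value.
                Console.WriteLine("IR frame received: {0} bytes", pixels.Length);
            }
        };

        sensor.Start();
        Console.ReadLine(); // run until Enter is pressed
        sensor.Stop();
    }
}
```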

Read more

Inside the Kinect for Windows SDK Update with Peter Zatloukal and Bob Heddle

Now that the updated Kinect for Windows SDK is available for download, Engineering Manager Peter Zatloukal and Group Program Manager Bob Heddle sat down to discuss what this significant update means for developers.

Bob Heddle demonstrates the new infrared functionality in the Kinect for Windows SDK.

Why should developers care about this update to the Kinect for Windows Software Development Kit (SDK)?

Bob: Because they can do more stuff and then deploy that stuff on multiple operating systems!

Peter: In general, developers will like the Kinect for Windows SDK because it gives them what I believe is the best tool out there for building applications with gesture and voice.

In the SDK update, you can do more things than you could before, there’s more documentation, and there’s a specific sample called Basic Interactions that’s a follow-on to our Human Interface Guidelines (HIG). The Human Interface Guidelines are a big investment of ours, and will continue to be. First we gave businesses and developers the HIG in May, and now we have this first sample, demonstrating an implementation of the HIG. With it, the Physical Interaction Zone (PhIZ) is exposed. The PhIZ is a component that maps a motion range to the screen size, allowing users to comfortably control the cursor on the screen.

This sample is a bit hidden in the toolkit browser, but everyone should check it out. It embodies best practices that we described in the HIG and can be repurposed by developers easily and quickly.

Bob: First we had the HIG, now we have this first sample. And it’s only going to get better. There will be more to come in the future.
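To make the PhIZ concept concrete, here is a hypothetical C# sketch of the core idea: normalize the hand position within a comfortable interaction box anchored to the body, then scale it to screen coordinates. This is not the actual PhIZ code from the Basic Interactions sample; the zone dimensions and the anchor joint are illustrative assumptions.

```csharp
using System;
using Microsoft.Kinect;

static class PhizSketch
{
    // Hypothetical interaction zone dimensions in meters (illustrative only).
    const float ZoneWidth = 0.40f;
    const float ZoneHeight = 0.30f;

    // Map the right hand to pixel coordinates on a screen of the given size.
    public static void MapHandToScreen(
        Skeleton skeleton, int screenWidth, int screenHeight,
        out int x, out int y)
    {
        SkeletonPoint hand = skeleton.Joints[JointType.HandRight].Position;
        SkeletonPoint shoulder = skeleton.Joints[JointType.ShoulderRight].Position;

        // Normalize the hand's offset from the shoulder to [0, 1] within the zone.
        float nx = Clamp((hand.X - shoulder.X + ZoneWidth / 2f) / ZoneWidth);
        float ny = Clamp((shoulder.Y - hand.Y + ZoneHeight / 2f) / ZoneHeight);

        x = (int)(nx * (screenWidth - 1));
        y = (int)(ny * (screenHeight - 1));
    }

    static float Clamp(float v)
    {
        return Math.Max(0f, Math.Min(1f, v));
    }
}
```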

Why upgrade?

Bob: There’s no downside to upgrading, so everyone should do it today! There are no breaking changes; it’s fully compatible with previous releases of the SDK, it gives you broader operating system support, there are a lot of new features, and it supports distribution in more countries with localized setup and license agreements. And, of course, China is now part of the equation.

Peter: There are four basic reasons to use the Kinect for Windows SDK and to upgrade to the most recent version:

  • More sensor data are exposed in this release.
  • It’s easier to use than ever (more samples, more documentation).
  • There’s more operating system and tool support (including Windows 8, virtual machine support, Microsoft Visual Studio 2012, and Microsoft .NET Framework 4.5).
  • It supports distribution in more geographical locations. 

What are your top three favorite features in the latest release of the SDK and why?

Peter: If I must limit myself to three, then I’d say the HIG sample (Basic Interactions) is probably my favorite new thing. Secondly, there’s so much more documentation for developers. And last but not least…infrared! I’ve been dying for infrared since the beginning. What do you expect? I’m a developer. Now I can see in the dark!

Bob: My three would be extended-range depth data, color camera settings, and Windows 8 support. Why wouldn’t you want to have the ability to develop for Windows 8? And by giving access to the depth data, we’re giving developers the ability to see beyond 4 meters. Sure, the data out at that range isn’t always pretty, but we’ve taken the guardrails off—we’re letting you go off-roading. Go for it!

New extended-range depth data now provides details beyond 4 meters. These images show the difference between depth data gathered from previous SDKs (left) versus the updated SDK (right).
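Here is a minimal sketch of reading the raw depth values, assuming the managed API of this release; with the guardrails off, values beyond 4000 mm now come through instead of being clamped.

```csharp
using System;
using Microsoft.Kinect;

class ExtendedDepthSample
{
    static void Main()
    {
        KinectSensor sensor = KinectSensor.KinectSensors[0];
        sensor.DepthStream.Enable(DepthImageFormat.Resolution640x480Fps30);

        sensor.DepthFrameReady += (s, e) =>
        {
            using (DepthImageFrame frame = e.OpenDepthImageFrame())
            {
                if (frame == null) return;
                short[] raw = new short[frame.PixelDataLength];
                frame.CopyPixelDataTo(raw);

                // The low 3 bits carry the player index; shift them off to get
                // depth in millimeters. With this update, values beyond 4000 mm
                // are reported rather than clamped.
                int center = raw.Length / 2;
                int depthMm = raw[center] >> DepthImageFrame.PlayerIndexBitmaskWidth;
                Console.WriteLine("Depth at center pixel: {0} mm", depthMm);
            }
        };

        sensor.Start();
        Console.ReadLine();
        sensor.Stop();
    }
}
```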

Peter: Oh yeah, and regarding camera settings, in case it isn’t obvious: this is for those people who want to tune their apps specifically to known environments.
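For that kind of tuning, the release exposes the color camera settings on the color stream. A hedged sketch follows, assuming the new ColorCameraSettings API of this release; the specific values are illustrative, not recommendations.

```csharp
using System;
using Microsoft.Kinect;

class CameraSettingsSample
{
    static void Main()
    {
        KinectSensor sensor = KinectSensor.KinectSensors[0];
        sensor.ColorStream.Enable(ColorImageFormat.RgbResolution640x480Fps30);
        sensor.Start();

        // Take manual control of exposure and white balance so the image
        // stays consistent in a known, fixed lighting environment.
        ColorCameraSettings settings = sensor.ColorStream.CameraSettings;
        settings.AutoExposure = false;
        settings.AutoWhiteBalance = false;
        settings.WhiteBalance = 4500; // color temperature in kelvins (illustrative)

        Console.WriteLine("Current exposure time: {0}", settings.ExposureTime);

        sensor.Stop();
    }
}
```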

What’s it like working together?

Peter: Bob is one of the most technically capable program managers (PMs) I have had the privilege of working with.

Bob: We have worked together for so long—over a decade and in three different companies—so there is a natural trust in each other and our abilities. When you are lucky enough to have that, you don’t have to spend energy and time figuring out how to work together. Instead, you can focus on getting things done. This leaves us more time to really think about the customer rather than the division of labor.

Peter: My team is organized by area of technical affinity. I have developers focused on:

  • SDK runtime
  • Computer vision/machine learning
  • Drivers and low-level subsystems
  • Audio
  • Samples and tools

Bob: We have a unique approach to the way we organize our teams: I take a very scenario-driven approach, while Peter takes a technically focused approach. My team is organized into PMs who look holistically across what end users need, versus what commercial customers need, versus what developers need.

Peter: We organize this way intentionally and we believe it’s a best practice that allows us to iterate quickly and successfully!

What was the process you and your teams went through to determine what this SDK release would include, and who is this SDK for?

Bob: This SDK is for every Kinect for Windows developer and anyone who wants to develop with voice and gesture. Seriously, if you’re already using a previous version, there is really no reason not to upgrade. You might have noticed that we gave developers a first version of the SDK in February, then a significant update in May, and now this release. We have designed Kinect for Windows around rapid updates to the SDK; as we roll out new functionality, we test our backwards compatibility very thoroughly, and we ensure no breaking changes.

We are wholeheartedly dedicated to Kinect for Windows. And we’re invested in continuing to release updated iterations of the SDK rapidly for our business and developer customers. I hope the community recognizes that we’re making the SDK easier and easier to use over time and are really listening to their feedback.

Peter Zatloukal, Engineering Manager
Bob Heddle, Group Program Manager
Kinect for Windows

Related Links

Programming With the Kinect for Windows Software Development Kit: Add Gesture and Posture Recognition to Your Applications

Author: David Catuhe
Language: English
ISBN: 0735666814
224 pages
September 2012
On Amazon (paperback)
On Amazon (Kindle edition)

Publisher’s description

Create rich experiences for users of Windows 7 and Windows 8 Developer Preview with this pragmatic guide to the Kinect for Windows Software Development Kit (SDK). The author, a developer evangelist for Microsoft, walks you through Kinect sensor technology and the SDK—providing hands-on insights for how to add gesture and posture recognition to your apps. If you’re skilled in C# and Windows Presentation Foundation, you’ll learn how to integrate Kinect in your applications and begin writing UIs and controls that can handle Kinect interaction. Read more

Kinect for Windows: SDK and Runtime version 1.5 Released

I am pleased to announce that today we have released version 1.5 of the Kinect for Windows runtime and SDK. Additionally, Kinect for Windows hardware is now available in Hong Kong, South Korea, Singapore, and Taiwan. Starting next month, Kinect for Windows hardware will be available in 15 additional countries: Austria, Belgium, Brazil, Denmark, Finland, India, the Netherlands, Norway, Portugal, Russia, Saudi Arabia, South Africa, Sweden, Switzerland and the United Arab Emirates. When this wave of expansion is complete, Kinect for Windows will be available in 31 countries around the world. Go to our Kinect for Windows website to find a reseller in your region.

 We have added more capabilities to help developers build amazing applications, including:

  • Kinect Studio, our new tool which allows developers to record and play back Kinect data, dramatically shortening and simplifying the development lifecycle of a Kinect application. Now a developer writing a Kinect for Windows application can record clips of users in the application’s target environment and then replay those clips at a later time for testing and further development.
  • A set of Human Interface Guidelines (HIG) to guide developers on best practices for the creation of Natural User Interfaces using Kinect.
  • The Face Tracking SDK, which provides a real-time 3D mesh of facial features—tracking the head position, location of eyebrows, shape of the mouth, etc. (a hedged sketch of its use follows this list).
  • Significant sample code additions and improvements.  There are many new samples in both C++ and C#, plus a “Basics” series of samples with language coverage in C++, C#, and Visual Basic.
  • SDK documentation improvements, including new resources as well as migration of documentation to MSDN for easier discoverability and real-time updates.
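As a sense of what the Face Tracking SDK looks like in code, here is a sketch based on the pattern in the SDK’s face tracking samples, assuming the toolkit’s managed wrapper (Microsoft.Kinect.Toolkit.FaceTracking); treat the member names as an approximation of that API rather than a definitive reference.

```csharp
using Microsoft.Kinect;
using Microsoft.Kinect.Toolkit.FaceTracking;

class FaceTrackingSketch
{
    FaceTracker tracker; // create once per sensor and reuse across frames

    // colorPixels and depthPixels are the latest frames copied out of the
    // color and depth streams; skeleton is a currently tracked skeleton.
    void TrackFrame(KinectSensor sensor, byte[] colorPixels,
                    short[] depthPixels, Skeleton skeleton)
    {
        if (tracker == null)
        {
            tracker = new FaceTracker(sensor);
        }

        FaceTrackFrame frame = tracker.Track(
            sensor.ColorStream.Format, colorPixels,
            sensor.DepthStream.Format, depthPixels,
            skeleton);

        if (frame.TrackSuccessful)
        {
            var shape = frame.Get3DShape(); // 3D mesh vertices of the face
            var rotation = frame.Rotation;  // head pose (pitch/yaw/roll)
        }
    }
}
```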

 We have continued to expand and improve our skeletal tracking capabilities in this release:

  • Seated Skeletal Tracking is now available. This tracks a 10-joint head/shoulders/arms skeleton, ignoring the leg and hip joints. It is not restricted to seated positions; it also tracks head/shoulders/arms when a person is standing. This makes it possible to create applications that are optimized for seated scenarios (such as office work with productivity software or interacting with 3D data) or standing scenarios in which the lower body isn’t visible to the sensor (such as interacting with a kiosk or when navigating through MRI data in an operating room).
  • Skeletal Tracking is supported in Near Mode, including both Default and Seated tracking modes. This allows businesses and developers to create applications that track skeletal movement at closer proximity, like when the end user is sitting at a desk or needs to stand close to an interactive display (see the sketch below).
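A minimal sketch of enabling both features together, assuming the SDK 1.5 managed API and a Kinect for Windows sensor (Near Mode requires the Kinect for Windows hardware):

```csharp
using Microsoft.Kinect;

class SeatedNearModeSample
{
    static void Main()
    {
        KinectSensor sensor = KinectSensor.KinectSensors[0];

        sensor.DepthStream.Enable(DepthImageFormat.Resolution640x480Fps30);
        sensor.SkeletonStream.Enable();

        // Track only the 10 upper-body joints (head/shoulders/arms).
        sensor.SkeletonStream.TrackingMode = SkeletonTrackingMode.Seated;

        // Allow depth sensing at closer range...
        sensor.DepthStream.Range = DepthRange.Near;
        // ...and let skeletal tracking operate in that near range too.
        sensor.SkeletonStream.EnableTrackingInNearRange = true;

        sensor.Start();
    }
}
```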

We have made performance and data quality enhancements, which improve the experience of all Kinect for Windows applications using the RGB camera or needing RGB and depth data to be mapped together (“green screen” applications are a common example):

  • Performance for the mapping of a depth frame to a color frame has been significantly improved, with an average speed increase of 5x (see the sketch after this list).
  • Depth and color frames will now be kept in sync with each other. The Kinect for Windows runtime continuously monitors the depth and color streams and corrects any drift.
  • RGB image quality has been improved in the RGB 640×480 @30fps and YUV 640×480 @15fps video modes. The image quality is now sharper and more color-accurate in high and low lighting conditions.
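For reference, here is a sketch of the mapping call whose performance improved, assuming the SDK 1.5 managed API (KinectSensor.MapDepthFrameToColorFrame) and matching 640×480 stream formats:

```csharp
using Microsoft.Kinect;

class GreenScreenSketch
{
    // Map every depth pixel to its color-frame coordinates; a common
    // first step for "green screen" (background removal) effects.
    static ColorImagePoint[] MapDepthToColor(KinectSensor sensor, short[] depthPixels)
    {
        var colorPoints = new ColorImagePoint[depthPixels.Length];
        sensor.MapDepthFrameToColorFrame(
            DepthImageFormat.Resolution640x480Fps30, depthPixels,
            ColorImageFormat.RgbResolution640x480Fps30, colorPoints);
        return colorPoints;
    }
}
```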

We have added new capabilities to enable avatar animation scenarios, making it easier for developers to build applications that control a 3D avatar, as in Kinect Sports:

  • The Kinect for Windows runtime provides Joint Orientation information for the skeletons tracked by the Skeletal Tracking pipeline.
  • The Joint Orientation is provided in two forms: a Hierarchical Rotation based on a bone relationship defined on the Skeletal Tracking joint structure, and an Absolute Orientation in Kinect camera coordinates (see the sketch below).
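A sketch of reading both forms for one bone, assuming the SDK 1.5 managed skeletal API (the BoneOrientations collection on a tracked Skeleton):

```csharp
using System;
using Microsoft.Kinect;

class JointOrientationSample
{
    static void PrintElbowOrientation(Skeleton skeleton)
    {
        // Bones are indexed by their end joint.
        BoneOrientation bone = skeleton.BoneOrientations[JointType.ElbowRight];

        // Rotation relative to the parent bone in the joint hierarchy...
        Matrix4 hierarchical = bone.HierarchicalRotation.Matrix;
        // ...and the same rotation expressed in Kinect camera coordinates.
        Vector4 absolute = bone.AbsoluteRotation.Quaternion;

        Console.WriteLine("Absolute quaternion: ({0}, {1}, {2}, {3})",
            absolute.X, absolute.Y, absolute.Z, absolute.W);
    }
}
```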

Finally, as I mentioned in my Sneak Peek Blog post, we released four new languages for speech recognition – French, Spanish, Italian, and Japanese. In addition, we released new language packs which enable speech recognition for the way a language is spoken in different regions: English/Great Britain, English/Ireland, English/Australia, English/New Zealand, English/Canada, French/France, French/Canada, Italian/Italy, Japanese/Japan, Spanish/Spain, and Spanish/Mexico.
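Applications select one of these recognizers by culture. The sketch below follows the selection pattern used in the SDK speech samples, assuming the Microsoft.Speech assemblies and at least one Kinect language pack are installed:

```csharp
using System;
using System.Linq;
using Microsoft.Speech.Recognition;

class SpeechLanguageSample
{
    // Find the Kinect acoustic-model recognizer for a given culture,
    // e.g. "fr-FR" or "en-AU", among the installed language packs.
    static RecognizerInfo FindKinectRecognizer(string cultureName)
    {
        return SpeechRecognitionEngine.InstalledRecognizers().FirstOrDefault(r =>
        {
            string value;
            return r.AdditionalInfo.TryGetValue("Kinect", out value)
                && "True".Equals(value, StringComparison.OrdinalIgnoreCase)
                && r.Culture.Name.Equals(cultureName, StringComparison.OrdinalIgnoreCase);
        });
    }
}
```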

As we have worked with customers large and small over the past months, we’ve seen the value in having a fully integrated approach: the Kinect software and hardware are designed together; audio, video, and depth are all fully supported and integrated; our sensor, drivers, and software work together to provide world class echo cancellation; our approach to human tracking, which is designed in conjunction with the Kinect sensor, works across a broad range of people of all shapes, sizes, clothes, and hairstyles, etc. And because we design the hardware and software together, we are able to make changes that open up exciting new areas for innovation, like Near Mode.

Furthermore, because Kinect for Windows is from Microsoft, our support, distribution, and partner network are all at a global scale. For example, the Kinect for Windows hardware and software are tested together and supported as a unit in every country we are in (31 countries by June!), and we will continue to add countries over time. Microsoft’s developer tools are world class, and our SDK is built to fully integrate with Visual Studio. Especially important for our global business customers is Microsoft’s ability to connect them to partners and experts who can help them use Kinect for Windows to re-imagine their brands, their products, and their processes.

It is exciting for us to have built and shipped such a significantly enhanced version of the Kinect for Windows SDK less than 16 weeks after launch. But we are even more excited about our plans for the future – both in country expansion for the sensor, and in enhanced capabilities of our runtime and SDK.  We believe the best is yet to come, and we can’t wait to see what developers will build with this!

Craig Eisler
General Manager, Kinect for Windows

 

Kinect for Windows Technologies: Boxing Robots to Limitless Possibilities

Most developers, including myself, are natural tinkerers. We hear of a new technology and want to try it out, explore what it can do, dream up interesting uses, and push the limits of what’s possible. Most recently, the Channel 9 team incorporated Kinect for Windows into two projects: BoxingBots and Project Detroit.

Life-sized BoxingBots are controlled by Kinect for Windows technologies.

The life-sized BoxingBots made their debut in early March at SXSW in Austin, Texas. Each robot is equipped with an on-board computer, which receives commands from two Kinect for Windows sensors and computers. The robots are controlled by two individuals whose movements – punching, rotating, stepping forward and backward – are interpreted and relayed to the robots, which in turn slug it out until one is struck and its pneumatically controlled head springs up.

The use of Kinect for Windows for telepresence applications, like controlling a robot or other mechanical device, opens up a number of interesting possibilities. Imagine a police officer using gestures and voice commands to remotely control a robot as it explores a building that may contain explosives. In the same vein, Kinect telepresence applications using robots could be used in the manufacturing, medical, and transportation industries.

Project Detroit’s front and rear Kinect cameras transmit a live video feed of surrounding pedestrians and objects directly to the interior dashboard displays.

Project Detroit asked the question: what do you get when you combine the world’s most innovative technology with a classic American car? The answer is a 2012 Ford Mustang with a 1967 fastback replica body, and everything from Windows Phone integration to built-in Wi-Fi, a Viper SmartStart security system, cloud services, augmented reality, Ford SYNC, an Xbox-enabled entertainment system, a Windows 8 slate, and Kinect for Windows cameras built into the taillights and headlights.

One of the key features we built for Project Detroit was the ability to read Kinect data, including a video stream, depth data, skeletal joint data, and audio streams, over the network using sockets (available here as an open source project). These capabilities could make it possible to receive an alert on your phone when someone gets too close to your car. You could then switch to a live video/audio stream from the Kinect, via the network, to see what they were doing. Using your phone, you could talk to them, asking politely that they “look, but not touch.”
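The transport idea can be sketched simply. This is a hypothetical, simplified sender rather than the actual open source project: length-prefix each frame and write it to a TCP stream so a remote viewer can reassemble it.

```csharp
using System;
using System.Net.Sockets;

class DepthFrameSender
{
    // Hypothetical, simplified sender: length-prefix each depth frame
    // and write it to a TCP stream for a remote viewer to consume.
    static void SendFrame(NetworkStream stream, short[] depthPixels)
    {
        byte[] payload = new byte[depthPixels.Length * sizeof(short)];
        Buffer.BlockCopy(depthPixels, 0, payload, 0, payload.Length);

        byte[] length = BitConverter.GetBytes(payload.Length);
        stream.Write(length, 0, length.Length);   // 4-byte length prefix
        stream.Write(payload, 0, payload.Length); // raw 16-bit depth data
    }
}
```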

While these technologies may not show up in production cars in the coming months (or years), Kinect for Windows technologies are well suited for automotive use: seeing objects such as pedestrians and cyclists behind and in front of the vehicle, making it easier to ease into tight parking spots, and controlling built-in electronic devices with the wave of a hand or a voice command.

It’s an exciting time not only to be a developer, but also to be a business, organization, or consumer who will have the opportunity to benefit from the evolving uses and limitless possibilities of the Kinect for Windows natural user interface.

Dan Fernandez
Senior Director, Microsoft Channel 9

Kinect for Windows Solution Leads to the Perfect Fitting Jeans

The Bodymetrics Pod is small enough to be used in retail locations, capturing customers’ unique body shapes so they can virtually select and purchase garments.

The writer Mark Twain once said, “We are alike, on the inside.” On the outside, however, few people are the same. While two people might be the same height and wear the same size, the way their clothing fits their bodies can vary dramatically. As a result, up to 40% of clothing purchased both online and in person is returned because of poor fit.

Finding the perfect fit so clothing conforms to a person’s unique body shape is at the heart of the Bodymetrics Pod. Developed by Bodymetrics, a London-based pioneer in 3D body-mapping, the Bodymetrics Pod was introduced to American shoppers for the first time today during Women’s Denim Days at Bloomingdale’s in Century City, Los Angeles. This is the first time Kinect for Windows has been used commercially in the United States for body mapping in a retail clothing environment.

Bloomingdale’s, a leader in retail innovation, has one of the largest offerings in premium denim from fashion-forward brands like J Brand, 7 For All Mankind, Citizens of Humanity, AG, and Paige. The Bodymetrics service allows customers to get their body mapped and find jeans that fit and flatter their unique shape from the hundreds of different jean styles that Bloomingdale’s stocks.

During Bloomingdale’s Denim Days, March 15 – 18, customers will be able to get their body mapped, and also become a Bodymetrics member. This free service enables customers to access an online account and order jeans based on their body shape.

The Bodymetrics scanning technology maps hundreds of measurements and contours, which can be used for choosing clothing that perfectly fits a person’s body.

“We’re very excited about bringing Bodymetrics to US shoppers,” explains Suran Goonatilake, CEO of Bodymetrics. “Once we 3D map a customer’s body, we classify their shape into three categories – emerald, sapphire and ruby. A Bodymetrics Stylist will then find jeans that exactly match the body shape of the customer from jean styles that Bloomingdale’s stocks.”

The process starts with a customer creating a Bodymetrics account. They are then directed to the Bodymetrics Pod, a secure, private space, where their body is scanned by eight Kinect for Windows sensors arranged in a circle. Bodymetrics’ proprietary software produces a 3D map of the customer’s body and then calculates the person’s shape, taking hundreds of measurements and contours into account. The body-mapping process takes less than five seconds.

Helping women shop for best-fitting jeans in department stores is just the start of what Bodymetrics envisions for its body-mapping technologies. The company is working on a solution that can be used at home: individuals will be able to scan their body, then go online to select, virtually try on, and purchase clothing that matches their body shape.

Goonatilake explains, “Body-mapping is in its infancy. We’re just starting to explore what’s possible in retail stores and at home. Stores are increasingly looking to provide experiences that entice shoppers into their stores, and then allow a seamless journey from stores to online. And we all want shopping experiences that are personalized to us – our size, shape and style.”

Even though people may not be identical on the outside, we desire clothing that fits well and complements our body shapes. The Kinect for Windows-enabled Bodymetrics Pod offers a retail-ready solution that makes the perfect fit beautifully simple.

Kinect for Windows Team