
The New Generation Kinect for Windows Sensor is Coming Next Year

The all-new active-infrared capabilities allow the new sensor to work in nearly any lighting condition. This makes it possible for developers to build apps with enhanced recognition of facial features, hand position, and more.

By now, most of you likely have heard about the new Kinect sensor that Microsoft will deliver as part of Xbox One later this year.

Today, I am pleased to announce that Microsoft will also deliver a new generation Kinect for Windows sensor next year. We’re continuing our commitment to equipping businesses and organizations with the latest natural technology from Microsoft so that they, in turn, can develop and deploy innovative touch-free applications for their businesses and customers. A new Kinect for Windows sensor and software development kit (SDK) are core to that commitment.

Both the new Kinect sensor and the new Kinect for Windows sensor are being built on a shared set of technologies. Just as the new Kinect sensor will bring opportunities for revolutionizing gaming and entertainment, the new Kinect for Windows sensor will revolutionize computing experiences. The precision and intuitive responsiveness that the new platform provides will accelerate the development of voice and gesture experiences on computers.

Some of the key capabilities of the new Kinect sensor include:

  • Higher fidelity
    The new sensor includes a high-definition (HD) color camera as well as a new noise-isolating multi-microphone array that filters ambient sounds to recognize natural speaking voices even in crowded rooms. Also included is Microsoft’s proprietary Time-of-Flight technology, which measures the time it takes individual photons to rebound off an object or person to create unprecedented accuracy and precision. All of this means that the new sensor recognizes precise motions and details, such as slight wrist rotation, body position, and even the wrinkles in your clothes. The Kinect for Windows community will benefit from the sensor’s enhanced fidelity, which will allow developers to create highly accurate solutions that see a person’s form better than ever, track objects and environments with greater detail, and understand voice commands in noisier settings than before.

The enhanced fidelity and depth perception of the new Kinect sensor will allow developers to create apps that see a person's form better, track objects with greater detail, and understand voice commands in noisier settings.

  • Expanded field of view
    The expanded field of view accommodates a multitude of differently sized rooms, minimizing the need to modify existing room configurations and opening up new solution-development opportunities. The combination of the new sensor’s higher fidelity plus expanded field of view will give businesses the tools they need to create truly untethered, natural computing experiences such as clicker-free presentation scenarios, more dynamic simulation and training solutions, up-close interactions, more fluid gesture recognition for quick interactions on the go, and much more.
        
  • Improved skeletal tracking
    The new sensor tracks more points on the human body than previously, including the tip of the hand and thumb, and tracks six skeletons at once. This not only yields more accurate skeletal tracking, it opens up a range of new scenarios, including improved “avateering,” the ability to develop enhanced rehabilitation and physical fitness solutions, and the possibility to create new experiences in public spaces—such as retail—where multiple users can participate simultaneously.

The new sensor tracks more points on the human body than previously and tracks six skeletons at once, opening a range of new scenarios, from improved "avateering" to experiences in which multiple users can participate simultaneously.

  • New active infrared (IR)
    The all-new active-IR capabilities allow the new sensor to work in nearly any lighting condition and, in essence, give businesses access to a new fourth sensor: audio, depth, color…and now active IR. This will offer developers better built-in recognition capabilities in different real-world settings—independent of the lighting conditions—including the sensor’s ability to recognize facial features, hand position, and more. 
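The Time-of-Flight measurement described under "Higher fidelity" can be sketched in a few lines: depth falls out of the round-trip travel time of emitted light. This is a minimal illustration of the principle only, not Microsoft's implementation:

```python
# Illustrative sketch of the time-of-flight principle: depth is derived from
# the round-trip travel time of emitted light. These names are hypothetical,
# not part of the Kinect SDK.
C = 299_792_458.0  # speed of light, m/s

def depth_from_round_trip(round_trip_seconds: float) -> float:
    """Distance to the reflecting surface: the light covers the path twice."""
    return C * round_trip_seconds / 2.0

# A surface 2 m away reflects light back after roughly 13 nanoseconds,
# which is why timing individual photons demands such precise hardware.
t = 2 * 2.0 / C
print(round(depth_from_round_trip(t), 6))  # 2.0
```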

I’m sure many of you want to know more. Stay tuned; at BUILD 2013 in June, we’ll share details about how developers and designers can begin to prepare to adopt these new technologies so that their apps and experiences are ready for general availability next year.

A new Kinect for Windows era is coming: an era of unprecedented responsiveness and precision.

Bob Heddle
Director, Kinect for Windows

Photos in this blog by STEPHEN BRASHEAR/Invision for Microsoft/AP Images

 

Inside the Newest Kinect for Windows SDK – Infrared Control

The Kinect for Windows software development kit (SDK) October release was a pivotal update with a number of key improvements. One important update in this release is how control of infrared (IR) sensing capabilities has been enhanced to create a world of new possibilities for developers.

IR sensing is a core feature of the Kinect sensor, but until this newest release, developers were somewhat restrained in how they could use it. The front of the Kinect for Windows sensor has three openings, each housing a core piece of technology. On the left is an IR emitter, which projects a factory-calibrated pattern of dots across the room in which the sensor resides. The middle opening is a color camera. On the right is the IR camera, which reads the dot pattern and helps the Kinect for Windows system software sense objects and people, along with their skeletal tracking data.
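The dot-pattern sensing described above is a structured-light technique: the IR camera measures how far each known dot has shifted (its disparity) and recovers depth by triangulation. A minimal sketch of that relation, using hypothetical focal-length and baseline values (the actual sensor calibration is proprietary):

```python
def depth_from_disparity(focal_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Classic structured-light/stereo relation: depth = f * b / d.
    The farther the surface, the smaller the dot displacement."""
    if disparity_px <= 0:
        raise ValueError("dot not displaced; depth undefined")
    return focal_px * baseline_m / disparity_px

# With a hypothetical 580 px focal length and a 7.5 cm emitter-to-camera
# baseline, a dot displaced by 29 px corresponds to a surface 1.5 m away.
print(depth_from_disparity(580.0, 0.075, 29.0))  # 1.5
```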


Kinect for Windows: SDK and Runtime version 1.5 Released

I am pleased to announce that today we have released version 1.5 of the Kinect for Windows runtime and SDK. Additionally, Kinect for Windows hardware is now available in Hong Kong, South Korea, Singapore, and Taiwan. Starting next month, Kinect for Windows hardware will be available in 15 additional countries: Austria, Belgium, Brazil, Denmark, Finland, India, the Netherlands, Norway, Portugal, Russia, Saudi Arabia, South Africa, Sweden, Switzerland, and the United Arab Emirates. When this wave of expansion is complete, Kinect for Windows will be available in 31 countries around the world. Go to our Kinect for Windows website to find a reseller in your region.

 We have added more capabilities to help developers build amazing applications, including:

  • Kinect Studio, our new tool which allows developers to record and play back Kinect data, dramatically shortening and simplifying the development lifecycle of a Kinect application. Now a developer writing a Kinect for Windows application can record clips of users in the application’s target environment and then replay those clips at a later time for testing and further development.
  • A set of Human Interface Guidelines (HIG) to guide developers on best practices for the creation of Natural User Interfaces using Kinect.
  • The Face Tracking SDK, which provides a real-time 3D mesh of facial features—tracking the head position, location of eyebrows, shape of the mouth, etc.
  • Significant sample code additions and improvements.  There are many new samples in both C++ and C#, plus a “Basics” series of samples with language coverage in C++, C#, and Visual Basic.
  • SDK documentation improvements, including new resources as well as migration of documentation to MSDN for easier discoverability and real-time updates.

 We have continued to expand and improve our skeletal tracking capabilities in this release:

  • Seated Skeletal Tracking is now available. This tracks a 10-joint head/shoulders/arms skeleton, ignoring the leg and hip joints. It is not restricted to seated positions; it also tracks head/shoulders/arms when a person is standing. This makes it possible to create applications that are optimized for seated scenarios (such as office work with productivity software or interacting with 3D data) or standing scenarios in which the lower body isn’t visible to the sensor (such as interacting with a kiosk or when navigating through MRI data in an operating room).
  • Skeletal Tracking is supported in Near Mode, including both Default and Seated tracking modes. This allows businesses and developers to create applications that track skeletal movement at closer proximity, like when the end user is sitting at a desk or needs to stand close to an interactive display.
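As a rough sketch of what seated mode means for application code, the following reduces a full skeleton to the 10 upper-body joints described above. The joint names mirror SDK terminology, but this is an illustration, not the actual Kinect API:

```python
# Hypothetical sketch: reduce a full skeleton to the 10 head/shoulders/arms
# joints that seated tracking reports. Joint names echo SDK terminology,
# but this is not the real Kinect for Windows API.
UPPER_BODY = {
    "Head", "ShoulderCenter", "ShoulderLeft", "ShoulderRight",
    "ElbowLeft", "ElbowRight", "WristLeft", "WristRight",
    "HandLeft", "HandRight",
}

def seated_view(skeleton: dict) -> dict:
    """Keep only the joints a seated-mode pipeline would track."""
    return {name: pos for name, pos in skeleton.items() if name in UPPER_BODY}

# A full skeleton with lower-body joints collapses to the 10 upper-body ones.
full = {name: (0.0, 0.0, 1.5)
        for name in UPPER_BODY | {"HipCenter", "KneeLeft"}}
print(len(seated_view(full)))  # 10
```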

We have made performance and data quality enhancements, which improve the experience of all Kinect for Windows applications using the RGB camera or needing RGB and depth data to be mapped together (“green screen” applications are a common example):

  • Performance for the mapping of a depth frame to a color frame has been significantly improved, with an average speed increase of 5x.
  • Depth and color frames will now be kept in sync with each other. The Kinect for Windows runtime continuously monitors the depth and color streams and corrects any drift.
  • RGB Image quality has been improved in the RGB 640×480 @30fps and YUV 640×480 @15fps video modes. The image quality is now sharper and more color-accurate in high and low lighting conditions.
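Conceptually, mapping a depth pixel to its color pixel is an unproject-transform-reproject step. The sketch below assumes a simple pinhole camera model and a pure horizontal offset between the two cameras; these numbers are hypothetical, and the real runtime uses full factory calibration:

```python
def map_depth_to_color(u: float, v: float, depth_m: float,
                       fx: float, fy: float, cx: float, cy: float,
                       baseline_m: float):
    """Unproject a depth pixel to a 3D point, shift it into the color
    camera's frame (pure horizontal translation assumed), and reproject."""
    # Unproject: pixel coordinates plus depth give a camera-space point.
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    # Translate into the color camera's frame.
    x_c = x - baseline_m
    # Reproject onto the color image plane.
    u_c = fx * x_c / depth_m + cx
    v_c = fy * y / depth_m + cy
    return u_c, v_c

# A pixel at the depth image's principal point, 1 m away, lands slightly to
# the left in the color image because of the (hypothetical) 2.5 cm baseline.
print(map_depth_to_color(320, 240, 1.0, 580.0, 580.0, 320.0, 240.0, 0.025))
```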

We have also added new capabilities to enable avatar animation scenarios, making it easier for developers to build applications that control a 3D avatar, such as Kinect Sports does:

  • The Kinect for Windows runtime provides Joint Orientation information for the skeletons tracked by the Skeletal Tracking pipeline.
  • The Joint Orientation is provided in two forms:  A Hierarchical Rotation based on a bone relationship defined on the Skeletal Tracking joint structure, and an Absolute Orientation in Kinect camera coordinates.
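The two forms are related: a bone's hierarchical rotation is its absolute orientation expressed relative to its parent joint, i.e. R_hier = R_parent^T * R_child for rotation matrices. A small stdlib-only sketch of that relation (not the SDK API):

```python
# Illustrative relation between absolute and hierarchical joint orientation.
# Plain 3x3 list-of-lists matrices keep the sketch dependency-free.
def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(m):
    return [[m[j][i] for j in range(3)] for i in range(3)]

def hierarchical_from_absolute(parent_abs, child_abs):
    """For rotation matrices the inverse is the transpose, so the child's
    rotation relative to its parent is parent^T * child."""
    return mat_mul(transpose(parent_abs), child_abs)

# If parent and child share the same absolute orientation, the hierarchical
# rotation is the identity: the bone is not rotated relative to its parent.
R = [[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]]  # 90 deg about Z
print(hierarchical_from_absolute(R, R))
```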

Finally, as I mentioned in my Sneak Peek Blog post, we released four new languages for speech recognition – French, Spanish, Italian, and Japanese. In addition, we released new language packs which enable speech recognition for the way a language is spoken in different regions: English/Great Britain, English/Ireland, English/Australia, English/New Zealand, English/Canada, French/France, French/Canada, Italian/Italy, Japanese/Japan, Spanish/Spain, and Spanish/Mexico.

As we have worked with customers large and small over the past months, we’ve seen the value in having a fully integrated approach: the Kinect software and hardware are designed together; audio, video, and depth are all fully supported and integrated; our sensor, drivers, and software work together to provide world class echo cancellation; our approach to human tracking, which is designed in conjunction with the Kinect sensor, works across a broad range of people of all shapes, sizes, clothes, and hairstyles, etc. And because we design the hardware and software together, we are able to make changes that open up exciting new areas for innovation, like Near Mode.

Furthermore, because Kinect for Windows is from Microsoft, our support, distribution, and partner network are all at a global scale. For example, the Kinect for Windows hardware and software are tested together and supported as a unit in every country we are in (31 countries by June!), and we will continue to add countries over time. Microsoft’s developer tools are world class, and our SDK is built to fully integrate with Visual Studio. Especially important for our global business customers is Microsoft’s ability to connect them to partners and experts who can help them use Kinect for Windows to re-imagine their brands, their products, and their processes.

It is exciting for us to have built and shipped such a significantly enhanced version of the Kinect for Windows SDK less than 16 weeks after launch. But we are even more excited about our plans for the future – both in country expansion for the sensor, and in enhanced capabilities of our runtime and SDK.  We believe the best is yet to come, and we can’t wait to see what developers will build with this!

Craig Eisler
General Manager, Kinect for Windows

 

Kinect for Windows Technologies: Boxing Robots to Limitless Possibilities

Most developers, including myself, are natural tinkerers. We hear of a new technology and want to try it out, explore what it can do, dream up interesting uses, and push the limits of what’s possible. Most recently, the Channel 9 team incorporated Kinect for Windows into two projects: BoxingBots and Project Detroit.

The life-sized BoxingBots made their debut in early March at SXSW in Austin, Texas. Each robot is equipped with an on-board computer, which receives commands from two Kinect for Windows sensors and computers. The robots are controlled by two individuals whose movements – punching, rotating, stepping forward and backward – are interpreted and relayed to the robots, which, in turn, slug it out until one is struck and its pneumatically controlled head springs up.
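One way such movement interpretation can work (purely illustrative; this is not the BoxingBots code) is to threshold joint velocities, for example treating a rapid forward motion of the wrist joint as a punch command:

```python
# Hypothetical gesture interpreter: flag frames where the wrist moves toward
# the sensor (decreasing z) faster than a threshold, in meters per second.
def detect_punch(wrist_z_positions, frame_dt=1.0 / 30.0,
                 speed_threshold=2.0):
    punches = []
    for i in range(1, len(wrist_z_positions)):
        velocity = (wrist_z_positions[i - 1] - wrist_z_positions[i]) / frame_dt
        if velocity > speed_threshold:
            punches.append(i)
    return punches

# The wrist snaps from 1.2 m to 1.0 m in one 30 fps frame: 6 m/s, a punch.
print(detect_punch([1.2, 1.2, 1.0, 1.0]))  # [2]
```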

The use of Kinect for Windows for telepresence applications, like controlling a robot or other mechanical device, opens up a number of interesting possibilities. Imagine a police officer using gestures and word commands to remotely control a robot, exploring a building that may contain explosives. In the same vein, Kinect telepresence applications using robots could be used in the manufacturing, medical, and transportation industries.

Project Detroit’s front and rear Kinect cameras transmit a live video feed of surrounding pedestrians and objects directly to the interior dashboard displays.

Project Detroit asked the question: what do you get when you combine the world’s most innovative technology with a classic American car? The answer is a 2012 Ford Mustang with a 1967 fastback replica body and everything from Windows Phone integration to built-in Wi-Fi, a Viper SmartStart security system, cloud services, augmented reality, Ford SYNC, an Xbox-enabled entertainment system, a Windows 8 slate, and Kinect for Windows cameras built into the taillights and headlights.

One of the key features we built for Project Detroit was the ability to read Kinect data, including a video stream, depth data, skeletal joint data, and audio streams, over the network using sockets (available here as an open source project). These capabilities could make it possible to receive an alert on your phone when someone gets too close to your car. You could then switch to a live video/audio stream from the Kinect, via the network, to see what they were doing. Using your phone, you could talk to them, asking politely that they “look, but not touch.”
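Streaming frames over raw sockets requires some framing so the receiver knows where one frame ends and the next begins; a common approach is a length prefix. This is a generic sketch of that idea, not the protocol of the open source project mentioned above:

```python
import struct

def pack_frame(payload: bytes) -> bytes:
    """Prefix the payload with its length as a 4-byte big-endian integer."""
    return struct.pack(">I", len(payload)) + payload

def unpack_frames(buffer: bytes):
    """Yield complete frames from a byte stream. A real receiver would keep
    any trailing partial frame around for the next socket read."""
    offset = 0
    while offset + 4 <= len(buffer):
        (length,) = struct.unpack_from(">I", buffer, offset)
        if offset + 4 + length > len(buffer):
            break  # partial frame: wait for more data
        yield buffer[offset + 4 : offset + 4 + length]
        offset += 4 + length

# Two frames packed back to back come out whole on the other side.
stream = pack_frame(b"depth") + pack_frame(b"color")
print([f.decode() for f in unpack_frames(stream)])  # ['depth', 'color']
```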

While these technologies may not show up in production cars in the coming months (or years), Kinect for Windows technologies are well suited for use in cars: seeing objects such as pedestrians and cyclists behind and in front of vehicles, making it easier to maneuver into tight parking spots, and controlling built-in electronic devices with the wave of a hand or a voice command.

It’s an exciting time not only to be a developer, but also a business, organization, or consumer who will have the opportunity to benefit from the evolving uses and limitless possibilities of the Kinect for Windows natural user interface.

Dan Fernandez
Senior Director, Microsoft Channel 9

Kinect for Windows Solution Leads to the Perfect Fitting Jeans

The Bodymetrics Pod is small enough to be used in retail locations, capturing customers’ unique body shapes so they can virtually select and purchase garments.

The writer Mark Twain once said, “We are alike, on the inside.” On the outside, however, few people are the same. While two people might be the same height and wear the same size, the way their clothing fits their bodies can vary dramatically. As a result, up to 40% of clothing purchased both online and in person is returned because of poor fit.

Finding the perfect fit so clothing conforms to a person’s unique body shape is at the heart of the Bodymetrics Pod. Developed by Bodymetrics, a London-based pioneer in 3D body-mapping, the Bodymetrics Pod was introduced to American shoppers for the first time today during Women’s Denim Days at Bloomingdale’s in Century City, Los Angeles. This is the first time Kinect for Windows has been used commercially in the United States for body mapping in a retail clothing environment.

Bloomingdale’s, a leader in retail innovation, has one of the largest offerings in premium denim from fashion-forward brands like J Brand, 7 For All Mankind, Citizens of Humanity, AG, and Paige. The Bodymetrics service allows customers to get their body mapped and find jeans that fit and flatter their unique shape from the hundreds of different jean styles that Bloomingdale’s stocks.

During Bloomingdale’s Denim Days, March 15 – 18, customers will be able to get their body mapped, and also become a Bodymetrics member. This free service enables customers to access an online account and order jeans based on their body shape.

The Bodymetrics scanning technology maps hundreds of measurements and contours, which can be used for choosing clothing that perfectly fits a person’s body.

“We’re very excited about bringing Bodymetrics to US shoppers,” explains Suran Goonatilake, CEO of Bodymetrics. “Once we 3D map a customer’s body, we classify their shape into three categories – emerald, sapphire, and ruby. A Bodymetrics Stylist will then find jeans that exactly match the body shape of the customer from the jean styles that Bloomingdale’s stocks.”

The process starts with a customer creating a Bodymetrics account. They are then directed to the Bodymetrics Pod, a secure, private space, where their body is scanned by 8 Kinect for Windows sensors arranged in a circle. Bodymetrics’ proprietary software produces a 3D map of the customer’s body, and then calculates the shape of the person, taking hundreds of measurements and contours into account. The body-mapping process takes less than 5 seconds.
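One example of a measurement derivable from such a scan (illustrative only; Bodymetrics’ actual algorithms are proprietary) is a circumference, computed by summing distances around an ordered ring of scanned points at a given height:

```python
import math

def circumference(ring_points):
    """Perimeter of the closed polygon through (x, y) points sampled in
    order around a horizontal slice of the body, in meters."""
    total = 0.0
    n = len(ring_points)
    for i in range(n):
        x0, y0 = ring_points[i]
        x1, y1 = ring_points[(i + 1) % n]  # wrap back to close the loop
        total += math.hypot(x1 - x0, y1 - y0)
    return total

# 64 points on a circle of radius 0.15 m approximate a waist of roughly
# 0.94 m; denser sampling converges to the true 2*pi*r circumference.
ring = [(0.15 * math.cos(a), 0.15 * math.sin(a))
        for a in (2 * math.pi * k / 64 for k in range(64))]
print(round(circumference(ring), 3))
```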

Helping women shop for best-fitting jeans in department stores is just the start of what Bodymetrics envisions for its body-mapping technologies. The company is working on a solution that can be used at home: individuals will be able to scan their body and then go online to select, virtually try on, and purchase clothing that matches their body shape.

Goonatilake explains, “Body-mapping is in its infancy. We’re just starting to explore what’s possible in retail stores and at home. Stores are increasingly looking to provide experiences that entice shoppers into their stores, and then allow a seamless journey from stores to online. And we all want shopping experiences that are personalized to us – our size, shape and style.”

Even though people may not be identical on the outside, we desire clothing that fits well and complements our body shapes. The Kinect for Windows-enabled Bodymetrics Pod offers a retail-ready solution that makes the perfect fit beautifully simple.

Kinect for Windows Team