Tag Archives: News

Inside the Newest Kinect for Windows SDK – Infrared Control

The Kinect for Windows software development kit (SDK) October release was a pivotal update with a number of key improvements. One important update in this release is enhanced control of infrared (IR) sensing capabilities, which opens up a world of new possibilities for developers.

IR sensing is a core feature of the Kinect sensor, but until this newest release, developers were somewhat constrained in how they could use it. The front of the Kinect for Windows sensor has three openings, each housing a core piece of technology. On the left is an IR emitter, which transmits a factory-calibrated pattern of dots across the room in which the sensor resides. The middle opening is a color camera. The third is the IR camera, which reads the dot pattern and helps the Kinect for Windows system software sense objects and people, along with their skeletal tracking data.
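
With the October release (SDK 1.6), applications can read the IR camera's raw stream directly. As a rough illustration, here is a minimal native C++ sketch, assuming the SDK 1.6 API in which infrared frames are requested through the color stream as 16-bit pixels (error handling abbreviated):

```cpp
// Minimal sketch: reading raw infrared frames with the Kinect for
// Windows native API (SDK 1.6). Error handling is abbreviated.
#include <Windows.h>
#include <NuiApi.h>

int main()
{
    INuiSensor* sensor = nullptr;
    if (FAILED(NuiCreateSensorByIndex(0, &sensor))) return 1;
    sensor->NuiInitialize(NUI_INITIALIZE_FLAG_USES_COLOR);

    // SDK 1.6 adds NUI_IMAGE_TYPE_COLOR_INFRARED: the IR camera's view,
    // packed as 16 bits per pixel, delivered over the color stream.
    HANDLE irStream = nullptr;
    sensor->NuiImageStreamOpen(NUI_IMAGE_TYPE_COLOR_INFRARED,
                               NUI_IMAGE_RESOLUTION_640x480,
                               0, 2, nullptr, &irStream);

    const NUI_IMAGE_FRAME* frame = nullptr;
    if (SUCCEEDED(sensor->NuiImageStreamGetNextFrame(irStream, 1000, &frame)))
    {
        NUI_LOCKED_RECT rect;
        frame->pFrameTexture->LockRect(0, &rect, nullptr, 0);
        // rect.pBits now points at 640x480 16-bit IR intensity values.
        frame->pFrameTexture->UnlockRect(0);
        sensor->NuiImageStreamReleaseFrame(irStream, frame);
    }

    sensor->NuiShutdown();
    sensor->Release();
    return 0;
}
```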

Continue reading

Unique Cancer Treatment Center alex’s place Uses Kinect for Windows to Help Put Kids at Ease

Adrian Ruiz plays with an interactive robot during a visit to alex’s place.

A unique clinic for treating children with cancer and blood disorders, alex’s place is designed to be a warm, open, communal space. The center, located in Miami, Florida, helps put its patients at ease by engaging them with interactive screens that transport them into different environments, where they become a friendly teddy bear, frog, or robot and control their character’s movements in real time.

“As soon as they walk in, technology is embracing them,” said Dr. Julio Barredo, chief of pediatric services at alex’s place in The Sylvester Comprehensive Cancer Center, University of Miami Health Systems.

Continue reading

Microsoft Research to ship Kinected Browser JavaScript library for Kinect-enabled websites

Kinected Browser JavaScript library

Dan Liebling and Meredith R. Morris from Microsoft Research will be presenting today at the 2012 ACM International Conference on Interactive Tabletops and Surfaces on their project “Kinected Browser: Depth Camera Interaction for the Web”, a toolkit that enables web developers to detect Kinect gestures and speech using a simple JavaScript library.

With the introduction of Kinect for Windows earlier this year and the increasing adoption of HTML5 and CSS3 browsers that support rich multi-touch events, Dan and Meredith saw an opportunity to expand the developer reach of the Kinect SDK beyond its C# and C++ roots.

Depth-camera based gestures can facilitate interaction with the Web on keyboard-and-mouse-free and/or multi-user technologies, such as large display walls or TV sets. We present a toolkit for bringing such gesture affordances into modern Web browsers using existing Web programming methods. Our framework is designed to enable Web programmers to incrementally add this capability with minimum effort by leveraging Web standard DOM structures and event models.

The team acknowledges existing community work on Kinect and browser integration: Swimbrowser, an app that uses Kinect to “swim” around websites but does not offer any developer endpoints, and DepthJS, a Safari and Chrome plugin that exposes a similar JavaScript library with more focus on high-level gestures.

In contrast to these, Kinected Browser takes a low-level approach, exposing only raw sensor (RGB and depth) data. It consists of two components: a browser plugin that talks directly to the Kinect C++ SDK, and a JavaScript library that a web developer can include and redistribute on a page. In the future, the browser plugin could also be abstracted to work with a range of depth camera systems.
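
Microsoft has not published the toolkit's internals, so the following is only a hypothetical C++ sketch of what the plugin half might look like: a loop that pulls depth frames from the native SDK and hands the raw pixels to the page. The dispatchToPage function is an invented stand-in for whatever plugin-to-JavaScript marshaling Kinected Browser actually uses.

```cpp
// Hypothetical sketch of the plugin half of a Kinect-to-browser bridge:
// poll depth frames from the native SDK and hand the raw pixels to the
// page. dispatchToPage() is a stand-in for the real (unpublished)
// plugin-to-JavaScript marshaling layer.
#include <Windows.h>
#include <NuiApi.h>

void dispatchToPage(const char* eventName, const void* pixels, int bytes)
{
    // Stub: the real toolkit would marshal this buffer into the page's
    // DOM-style event model here.
    (void)eventName; (void)pixels; (void)bytes;
}

void pumpDepthFrames(INuiSensor* sensor, HANDLE depthStream, HANDLE frameReady)
{
    while (WaitForSingleObject(frameReady, INFINITE) == WAIT_OBJECT_0)
    {
        const NUI_IMAGE_FRAME* frame = nullptr;
        if (FAILED(sensor->NuiImageStreamGetNextFrame(depthStream, 0, &frame)))
            continue;

        NUI_LOCKED_RECT rect;
        frame->pFrameTexture->LockRect(0, &rect, nullptr, 0);
        // Raw 16-bit depth pixels go to the page; the JavaScript library
        // re-exposes them to web developers through DOM-style events.
        dispatchToPage("kinectdepth", rect.pBits, rect.size);
        frame->pFrameTexture->UnlockRect(0);
        sensor->NuiImageStreamReleaseFrame(depthStream, frame);
    }
}
```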

The benefit of this “versatile” approach is that it supports many more exciting scenarios, such as “Web applications using depth data, e.g., to scan physical artifacts and input them into 3D printing services” or even multi-user interactions on a single website enabled by facial recognition and skeleton tracking.

Compared to Xbox game titles or apps, distributing and promoting a website is much easier. It would be fascinating to see whether websites become a popular platform for new Kinect hacks and games.

The Kinected Browser library should be available for download from the Microsoft Research project site later today.

Personify Live Uses Microsoft Kinect Or Other Depth Cameras For A Video Conferencing Service That Layers Presenters Over Content


A video conferencing startup called Personify, Inc. (formerly Nuvixa) is launching its first product next week, which will allow users to take advantage of the motion-sensing technology in Microsoft Kinect and other depth cameras to overlay video of themselves on top of their presentations in real time. Using the Personify Live software, users can present on top of their content, gesturing to it as they go along. This content can include a slideshow, a computer desktop screen, streaming video, or anything else you can run on your computer.
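
Personify has not disclosed its segmentation pipeline, but the general technique behind depth-camera presenter extraction is straightforward to sketch: use the depth map to keep only the color pixels in a near-foreground band and make everything else transparent, so the presenter can be composited over slides or video. A minimal illustrative C++ sketch of that generic idea (not Personify's actual algorithm), assuming a depth map already registered to the color image:

```cpp
// Minimal sketch of depth keying: keep color pixels whose depth falls
// inside a near-foreground band, make the rest transparent. This is the
// generic technique, not Personify's actual (unpublished) pipeline, and
// it assumes the depth map is already registered to the color image.
#include <cstdint>
#include <vector>

struct Rgba { uint8_t r, g, b, a; };

std::vector<Rgba> keyPresenter(const std::vector<Rgba>& color,
                               const std::vector<uint16_t>& depthMm,
                               uint16_t nearMm = 500,
                               uint16_t farMm  = 1500)
{
    std::vector<Rgba> out(color.size());
    for (size_t i = 0; i < color.size(); ++i)
    {
        // A pixel is "presenter" if its depth lies in the foreground band.
        const bool foreground = depthMm[i] >= nearMm && depthMm[i] <= farMm;
        out[i] = color[i];
        out[i].a = foreground ? 255 : 0;  // transparent background
    }
    return out;
}
```

The resulting RGBA image can then be alpha-blended over whatever content the presenter is sharing.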

Personify co-founder Sanjay Patel, now a professor at the University of Illinois at Urbana-Champaign, has been involved in the startup industry for years. Most notably, he was CTO at Ageia Technologies, maker of the PhysX physics engine used in game development, which Nvidia acquired in 2008. Following the sale of the company, a colleague introduced him to depth cameras, which the colleague had been researching since 2004.

“They were these big, ‘research-y,’ industrial-looking machines,” says Patel. “But they eventually became the technology that’s inside Kinect.”

Upon seeing these cameras, Patel began to think about their potential use cases. “If we could get these to be cheaper somehow, and smaller, and integrated into a PC environment,” he thought to himself, “we could really improve the way people communicate with video. That was really what the passion of our founding team was,” he adds. After putting together the initial team in 2009, Personify, then Nuvixa, began to develop software to use depth cameras with video conferencing. Microsoft Kinect, of course, was known as Project Natal at the time Personify got off the ground, as Kinect’s technology wasn’t put on the market until November 2010.

The first beta test began in October of last year, offering access to a limited number of pilot users, including SAP, McKinsey & Company, and LinkedIn, to name a few. Offered then as a freemium product, the service drew around 2,000 sign-ups during its tests, and the company is now in the process of converting those users to paying customers. Pricing starts at $20/user. Another package on offer bundles three months of Personify’s service for free with Asus’s Xtion Pro Live depth camera for $199.

Unlike today’s online meetings and web conferencing, where video tends to be lightly used and the presenter is stuck off to the side in a small box, Personify Live is designed to let its users communicate with gestures, eye contact, and other human-to-human interactions, more like how a presentation would come across in real-world, face-to-face meetings.

Pilot testers have adopted the technology for inside sales, webinars, web-app and online demos, training, education (both K-12 and higher ed), and more. However, Personify’s primary focus is on enterprise sales, where the technology works alongside current in-house solutions, including WebEx, GoToMeeting, and Microsoft Lync. It can also work with Join.me, but only when using Personify’s screenshare, not Join.me’s. “Most companies already have a web conferencing suite they use,” says Patel. “We don’t want them to stop using those things. Our product layers on top of those products,” he explains.

Personify is now a team of around 20, split evenly between Illinois and Vietnam; its other co-founders include Dennis Lin, Minh Do, and Quang Nguyen. The company is also backed by $3.5 million in Series A funding from AMD Ventures, Liberty Global Ventures, Illinois Ventures, and Serra Ventures.

An interesting note about Liberty: it is the world’s second-largest cable operator, based in Europe. The company sees potential for Personify in the living room, where users could one day video call each other over their TVs. Instead of Skype or some other web chatting product (like what’s built into Kinect today), Personify could overlay just the person calling on top of the live TV show or movie being watched. That future isn’t as far off as you may think. Patel says they should have demos of this by the end of next year, and possibly deployments by 2014.

In addition, he sees this as a possibility on mobile by that time, too. Depth cameras on mobile? we asked. Did he have direct knowledge of that? we wanted to know.

“I can’t really start talking about this stuff now,” says Patel. “But it’s happening. That’s all I can say.”



Kinect Fusion Coming to Kinect for Windows

Last week, I had the privilege of giving attendees at the Microsoft event, BUILD 2012, a sneak peek at an unreleased Kinect for Windows tool: Kinect Fusion.

Kinect Fusion was first developed as a research project at the Microsoft Research lab in Cambridge, U.K. As soon as the Kinect for Windows community saw it, they began asking us to include it in our SDK. Now, I’m happy to report that the Kinect for Windows team is, indeed, working on incorporating it and will have it available in a future release.

In this Kinect Fusion demonstration, a 3-D model of a home office is being created by capturing multiple views of the room and the objects on and around the desk. This tool has many practical applications, including 3-D printing, digital design, augmented reality, and gaming.

Kinect Fusion reconstructs a 3-D model of an object or environment by combining a continuous stream of data from the Kinect for Windows sensor. It allows you to capture information about the object or environment being scanned that isn’t viewable from any one perspective. This can be accomplished either by moving the sensor around an object or environment or by moving the object being scanned in front of the sensor.

Onlookers experience the capabilities of Kinect Fusion as a member of the Kinect for Windows team performs a live demo during BUILD 2012.

Kinect Fusion takes the incoming depth data from the Kinect for Windows sensor and uses the sequence of frames to build a highly detailed 3-D map of objects or environments. The tool then averages the readings over hundreds or thousands of frames to achieve more detail than would be possible from a single reading. This allows Kinect Fusion to gather and incorporate data not viewable from any single viewpoint. Among other things, it enables 3-D object model reconstruction, 3-D augmented reality, and 3-D measurements. You can imagine the multitude of business scenarios where these would be useful, including 3-D printing, industrial design, body scanning, augmented reality, and gaming.
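
To make the averaging step concrete: the published Kinect Fusion research integrates each depth observation into a voxel grid of truncated signed distances (TSDF), with every voxel keeping a running weighted mean so that noise shrinks as frames accumulate. A simplified C++ sketch of just that per-voxel update, with camera tracking and raycasting omitted:

```cpp
// Simplified sketch of Kinect Fusion's per-voxel averaging: each new
// depth observation is folded into a running weighted mean, so noise
// shrinks as hundreds or thousands of frames accumulate. The published
// technique stores truncated signed distances (TSDF); this shows only
// the weighted-average update.
struct Voxel {
    float value  = 0.0f;  // averaged (signed) distance estimate
    float weight = 0.0f;  // observations folded in so far
};

// Fold one new observation into a voxel's running average. Capping the
// weight keeps the model responsive to scene changes.
void integrate(Voxel& v, float observation, float maxWeight = 1000.0f)
{
    v.value  = (v.value * v.weight + observation) / (v.weight + 1.0f);
    if (v.weight < maxWeight) v.weight += 1.0f;
}
```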

We look forward to seeing how our developer community and business partners will use the tool.

Chris White
Senior Program Manager, Kinect for Windows
