
News for Kinect developers

The Kinect for Windows (“K4W”) team at Microsoft Corp. has just released several pieces of news of direct interest to developers:

First of all, the full set of code samples shipped with the SDK is now available on CodePlex under the Apache 2.0 license. For the moment, the project does not accept external contributions, but that is something that could change in the future…
To learn more: http://kinectforwindows.codeplex.com/


Next, the team is launching a new blog dedicated to developers. From now on, two complementary blogs will coexist:

– The K4W blog remains dedicated to usage scenarios
Its address: http://blogs.msdn.com/b/kinectforwindows

– The newly launched K4WDev blog will focus on development: technical information, tips and tricks, code samples, and more are on the agenda.
Its address: http://blogs.msdn.com/b/k4wdev/

Finally, a new Twitter hashtag has appeared for following development-related topics: #k4wdev. As a reminder, the official Twitter account for Kinect for Windows is @KinectWindows.

With that, happy coding!

Microsoft Research to ship Kinected Browser JavaScript library for Kinect-enabled websites


Dan Liebling and Meredith R. Morris from Microsoft Research will present their project “Kinected Browser: Depth Camera Interaction for the Web” today at the 2012 ACM International Conference on Interactive Tabletops and Surfaces. The toolkit enables web developers to detect Kinect gestures and speech using a simple JavaScript library.

With the introduction of Kinect for Windows earlier this year and the growing adoption of HTML5- and CSS3-capable browsers that support rich multi-touch events, Dan and Meredith saw an opportunity to expand the developer reach of the Kinect SDK beyond its C# and C++ roots.

Depth-camera based gestures can facilitate interaction with the Web on keyboard-and-mouse-free and/or multi-user technologies, such as large display walls or TV sets. We present a toolkit for bringing such gesture affordances into modern Web browsers using existing Web programming methods. Our framework is designed to enable Web programmers to incrementally add this capability with minimum effort by leveraging Web standard DOM structures and event models.

The team acknowledges some existing community work on integrating the Kinect with browsers: Swimbrowser, an app that uses the Kinect to “swim” around websites but does not offer any developer endpoints, and DepthJS, a Safari and Chrome plugin that exposes a similar JavaScript library with more of a focus on high-level gestures.

In contrast to these, Kinected Browser takes a low-level approach that exposes only raw sensor (RGB and depth) data. It consists of two components: a browser plugin that talks directly to the Kinect C++ SDK, and a JavaScript library that a web developer can include on the page and redistribute. In the future, the browser plugin could also be abstracted to work with a range of depth camera systems.
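The post doesn't document the JavaScript surface itself, so the sketch below is only a guess at what consuming the toolkit might look like, based on the description above: a plugin pushing raw RGB and depth frames into a small JS library that hooks into standard DOM event patterns. Every identifier in it (kinected.js, KinectedBrowser, the "depthframe" event, the frame fields) is a hypothetical placeholder, not the actual API. A minimal TypeScript sketch:

// Hypothetical sketch only: the real Kinected Browser API is not documented
// in this post, so every identifier below (kinected.js, KinectedBrowser,
// "depthframe", frame.width/height/data) is an assumption for illustration.
//
// The page would include the redistributable JS library, e.g.:
//   <script src="kinected.js"></script>
// and the browser plugin would feed raw sensor frames into DOM-style events.

interface DepthFrame {
  width: number;      // depth map width in pixels (assumed)
  height: number;     // depth map height in pixels (assumed)
  data: Uint16Array;  // per-pixel depth in millimetres (assumed)
}

declare const KinectedBrowser: {
  // Assumed DOM-like event registration exposed by the JS half of the toolkit.
  addEventListener(type: "depthframe", handler: (frame: DepthFrame) => void): void;
};

// Example of incremental adoption: react to the raw depth stream by fading in
// an existing page element when something comes within one metre of the sensor.
KinectedBrowser.addEventListener("depthframe", (frame) => {
  let nearPixels = 0;
  for (let i = 0; i < frame.data.length; i++) {
    if (frame.data[i] > 0 && frame.data[i] < 1000) {
      nearPixels++;
    }
  }
  const banner = document.getElementById("proximity-banner");
  if (banner) {
    banner.style.opacity = nearPixels > frame.data.length * 0.05 ? "1" : "0.2";
  }
});

The appeal of keeping the library at this low level is exactly what the quoted abstract describes: existing pages opt in incrementally, using the DOM structures and event idioms web developers already know.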

The benefit of this “versatile” approach is that it supports many more exciting scenarios, such as “Web applications using depth data, e.g., to scan physical artifacts and input them into 3D printing services”, or even multi-user interactions on a single website enabled by facial recognition and skeleton tracking.

Compared to Xbox game titles or apps, distributing and promoting a website is much easier. It will be fascinating to see whether the web becomes a popular platform for new Kinect hacks and games.

The Kinected Browser library should be available for download from the Microsoft Research project site later today.

Kinect scanning becomes serious business with third-party KinectFusion alternatives

Photo credit: Tony Buser on Flickr (links below)

Ever since Microsoft Research showed off the extremely impressive KinectFusion demo and shortly after published its research findings, developers have been trying hard to replicate the mad science to give more people the ability to turn everyday objects into 3D models with just a Kinect sensor.

Today Greg Duncan on Channel 9's Coding4Fun posted a video of ReconstructMe, one of the better third-party implementations, which comes very close to what KinectFusion is able to achieve, and then some.

Among the alternatives, which include PointCloud's open-source library and Materix 3Dify, ReconstructMe seems to produce the best results.

Currently free for non-commercial use, ReconstructMe lets anyone take a Kinect sensor connected to a Windows PC and freely roam around objects to generate industry-standard 3D models that can even be sent straight to a 3D printer. And one 3D-printing hobbyist, Tony Buser, has done just that.

In fact, Tony is so enthusiastic about Kinect scanning that he has developed two mods for the Kinect that specifically improve its practical use for scanning.

The first is a hand-held attachment that lets him hold the Kinect like a barcode scanner. The second, an ingenious one, is attaching a pair of reading glasses that overcomes the Kinect's close-proximity limitation, enabling more detailed scanning of smaller objects with impressive results.

As the software improves with time and more people gain access to 3D printers, I'm sure we'll see more and more hobbyists explore the Kinect's scanning capabilities, making home replication of popular objects a practical and affordable reality.