Tag archive: natural user interface

Meetup

Kinect Genius Bar Meetup at the BnF

This Wednesday, July 2 at 6 pm, the KGB (Kinect Genius Bar) will take over the demonstration space of the Labo at the BnF (Bibliothèque nationale de France) for three workshops on natural user interfaces and their impact on our daily lives, presented by:

  • Vincent Guigui, expert in new interaction technologies at OCTO Technology
  • Antoine Habert, founder of Handisense, a company that builds accessibility solutions based on natural user interfaces
  • Greg Madison, designer of innovative interactions and user experiences

Come one, come all (attendance poll on the Meetup page below).

Kinect Genius Bar Meetup at the Bibliothèque nationale de France

Quai François Mauriac, Paris

Google I/O: interior design and PrimeSense's Capri mobile sensor on a Nexus 10

Technically speaking, the device is very, very small (somewhat reminiscent of the Leap Motion), and the demo in which the user places virtual objects in the real-world scene through augmented reality is quite impressive.

However, a few points worth noting:

  • Unlike the Kinect, however, PrimeSense doesn’t think gestures will play a significant role in how we use Capri to interact with our gadgets
  • In many respects, PrimeSense appears to be taking the same strategy Google does with Glass: get developers excited about the tech in the hopes they’ll come up with clever uses for it
  • Use cases which haven’t even been thought up yet

To sum up: the market is seeing the arrival of innovative, high-quality technology components, but it remains up to us (developers) to find relevant uses for them and turn them into revolutionary experiences.

So what are we waiting for? 😉

Source: PrimeSense demonstrates Capri 3D sensor on Nexus 10 hands-on.

Feedback in touch-free interfaces


One of the basic principles of a good user experience is to give users a clear way of knowing that their actions have been registered; this is the notion of feedback.

With a mouse or a touch screen, the physical sensation of the click or the touch is usually enough. Some touch platforms even go a little further and provide haptic feedback (a vibration on contact).

In touch-free or gestural interfaces such as Kinect, you have to fall back on visual or audio feedback (when the usage conditions allow it).
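To make this concrete, here is a minimal sketch of the classic "hover to select" (dwell) pattern that many gesture-driven applications use as a stand-in for the missing click sensation. All the names below are our own invention, not from any SDK, and the actual drawing is left out:

```cpp
#include <cstdio>

// Sketch of "hover to select" dwell feedback (names are ours, not from
// any SDK). The UI draws progress() as a filling ring so the user can
// see that holding the cursor over a button is being registered.
struct DwellButton {
    float x, y, width, height;   // button rectangle, screen units
    float dwellSeconds = 1.5f;   // hover time required to "click"
    float hovered = 0.0f;        // accumulated hover time

    bool contains(float cx, float cy) const {
        return cx >= x && cx <= x + width && cy >= y && cy <= y + height;
    }

    // Call once per frame with the gesture cursor position; returns true
    // exactly on the frame the dwell completes.
    bool update(float cx, float cy, float dt) {
        if (contains(cx, cy)) {
            hovered += dt;
            if (hovered >= dwellSeconds) { hovered = 0.0f; return true; }
        } else {
            hovered = 0.0f;      // leaving the button resets the feedback
        }
        return false;
    }

    float progress() const { return hovered / dwellSeconds; } // 0..1 for the ring
};

int main() {
    DwellButton play{100, 100, 200, 80};
    // Simulate two seconds of the hand cursor resting on the button at 30 fps.
    for (int frame = 0; frame < 60; ++frame) {
        if (play.update(150, 140, 1.0f / 30.0f))
            std::printf("frame %d: button activated\n", frame);
        else if (frame % 10 == 0)
            std::printf("frame %d: feedback ring at %.0f%%\n", frame, play.progress() * 100);
    }
}
```

The key is the continuous progress value: rendered as a filling ring or a growing highlight, it tells users at every frame that holding their hand still is being taken into account, long before the action actually fires.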
Read more

How Kinect and Leap Motion complement each other

Kinect for Windows vs. Leap Motion

Kinect, Leap Motion, motion sensors… 2012, and especially 2013, will be the years of natural user interfaces.

I have had a Kinect for Windows since the day it came out, and I recently got a Leap Motion as well. Let me share my impressions of these two new-generation devices.

Read more

Design Practices for Touch-free Interactions

Source: New Design Practices for Touch-free Interactions | UX Magazine.

Touch interaction has become practically ubiquitous in developed markets, and that has changed users’ expectations and the way UX practitioners think about human–computer interaction (HCI). Now, touch-free gestures and Natural Language Interaction (NLI) are bleeding into the computing mainstream the way touch did years ago. These will again disrupt UX, from the heuristics that guide us, to our design patterns and deliverables.

Swipe-Spread-Squeeze Gesture

Read more

The Commercial Birth of Natural Computing

Source: AllThingsD, Leslie Feinzaig, Senior Product Manager, Kinect for Windows

Punch card. Keyboard. Mouse. Touchscreen. Voice. Gesture.

This abbreviated history of human-computer interaction follows a clear trajectory of improvement, where each mode of communication with technology is demonstrably easier to use than the last. We are now entering an era of natural computing, where our interaction with technology becomes more like a conversation, effortless and ordinary, and less like a chore, clunky and difficult. Those of us working in the field are focused on teaching computers to understand and adapt to the most natural human actions, instead of forcing people to learn to understand and adapt to technology.


Read more

Meta: glasses for seeing and acting in the real and virtual worlds

Via: "Meta: des lunettes pour voir et agir dans le monde réel et virtuel", Abavala.

Meta, a new startup out of Columbia University, has teamed up with the electronics giant Epson in an attempt to shake up the nascent augmented-reality market. Together they have created a new device: a pair of glasses fitted with two displays (one per eye) and able to track the movement and position of the hands in space. It is the unlikely union of two small screens, a Kinect-like device, and a tiny computer, all worn like a pair of glasses.
[…]
As is often the case with prototypes, there is no information about the price or availability of these new-style glasses. All we know is that they may soon appear on the crowdfunding site Kickstarter.

In most immersive prototypes, manipulating virtual objects is a real problem: it takes a significant mental effort for the user to map their real-world (touch-free) movements onto what is shown on a distant screen (computer, video projector).

Augmented reality through glasses or holograms (Minority Report style) is most likely the right answer; what remains is to find the technical solutions that make it possible (miniaturization, precision, robustness to interference).

MIT's SixthSense project (SixthSense, a wearable gestural interface – MIT Media Lab) worked around the problem by pairing a camera with a pico-projector, letting users manipulate their data projected onto any surface (including the palm of their own hand).

Kinect for Windows Shopping Solutions Showcased at National Retail Federation Expo

Swivel Close-Up, a Kinect for Windows-based kiosk from FaceCake, lets customers visualize themselves in small accessories such as makeup, sunglasses, and jewelry.

Microsoft Kinect for Windows has been playing an increasingly important role in retail, from interactive kiosks at stores such as Build-A-Bear Workshop, to virtual dressing rooms at fashion leaders like Bloomingdale’s, to virtual showrooms at Nissan dealerships. This year’s National Retail Federation (NRF) Convention and Expo, which took place earlier this week, showcased several solutions that provide retailers with new ways to drive customer engagement, sales, and loyalty.

Read more

Inside the Newest Kinect for Windows SDK – Infrared Control

The Kinect for Windows software development kit (SDK) October release was a pivotal update with a number of key improvements. One important update in this release is how control of infrared (IR) sensing capabilities has been enhanced to create a world of new possibilities for developers.

IR sensing is a core feature of the Kinect sensor, but until this newest release, developers were somewhat restrained in how they could use it. The front of the Kinect for Windows sensor has three openings, each housing a core piece of technology. On the left, there is an IR emitter, which transmits a factory calibrated pattern of dots across the room in which the sensor resides. The middle opening is a color camera. The third is the IR camera, which reads the dot pattern and can help the Kinect for Windows system software sense objects and people along with their skeletal tracking data.
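As an illustration, here is a minimal sketch of opening the new IR stream with the native C++ API of the October (1.6) release. Error handling is trimmed for brevity, and the details are worth double-checking against NuiApi.h:

```cpp
// Sketch: reading raw infrared frames with the native Kinect for Windows
// SDK 1.6 C++ API. Link against Kinect10.lib.
#include <Windows.h>
#include <NuiApi.h>
#include <cstdio>

int main() {
    // IR frames are delivered through the color pipeline, so we initialize "color".
    if (FAILED(NuiInitialize(NUI_INITIALIZE_FLAG_USES_COLOR))) return 1;

    HANDLE frameReady = CreateEvent(NULL, TRUE, FALSE, NULL);
    HANDLE irStream = NULL;
    NuiImageStreamOpen(NUI_IMAGE_TYPE_COLOR_INFRARED,   // new in the October (1.6) SDK
                       NUI_IMAGE_RESOLUTION_640x480,
                       0, 2, frameReady, &irStream);

    for (int i = 0; i < 30; ++i) {                      // grab about a second of frames
        WaitForSingleObject(frameReady, INFINITE);
        const NUI_IMAGE_FRAME* frame = NULL;
        if (FAILED(NuiImageStreamGetNextFrame(irStream, 0, &frame))) continue;

        NUI_LOCKED_RECT locked;
        frame->pFrameTexture->LockRect(0, &locked, NULL, 0);
        // 16 bits of IR intensity per pixel; sample the center of the 640x480 image.
        const USHORT* pixels = reinterpret_cast<const USHORT*>(locked.pBits);
        std::printf("center IR intensity: %u\n", pixels[240 * 640 + 320]);
        frame->pFrameTexture->UnlockRect(0);

        NuiImageStreamReleaseFrame(irStream, frame);
    }
    NuiShutdown();
    return 0;
}
```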

Read more

Inside the Kinect for Windows SDK Update with Peter Zatloukal and Bob Heddle

Now that the updated Kinect for Windows SDK is available for download, Engineering Manager Peter Zatloukal and Group Program Manager Bob Heddle sat down to discuss what this significant update means to developers.

Bob Heddle demonstrates the new infrared functionality in the Kinect for Windows SDK.

Why should developers care about this update to the Kinect for Windows Software Development Kit (SDK)?

Bob: Because they can do more stuff and then deploy that stuff on multiple operating systems!

Peter: In general, developers will like the Kinect for Windows SDK because it gives them what I believe is the best tool out there for building applications with gesture and voice.

In the SDK update, you can do more things than you could before, there’s more documentation, plus there’s a specific sample called Basic Interactions that’s a follow-on to our Human Interface Guidelines (HIG). Human Interface Guidelines are a big investment of ours, and will continue to be. First we gave businesses and developers the HIG in May, and now we have this first sample, demonstrating an implementation of the HIG. With it, the Physical Interaction Zone (PhIZ) is exposed. The PhIZ is a component that maps a motion range to the screen size, allowing users to comfortably control the cursor on the screen.
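The PhIZ itself ships with the toolkit, but the underlying idea is easy to sketch. The zone bounds below are invented for illustration, not the values the SDK actually uses:

```cpp
#include <algorithm>
#include <cstdio>

// Illustrative sketch only: map a comfortable physical box in front of
// the user (hand position relative to the shoulder, in meters) onto the
// screen, the core idea behind the toolkit's PhIZ component.
struct Phiz {
    float left = -0.25f, right = 0.35f;   // hand X range (invented values)
    float bottom = -0.20f, top = 0.30f;   // hand Y range (invented values)

    void toScreen(float handX, float handY, int screenW, int screenH,
                  int& pixelX, int& pixelY) const {
        float u = (handX - left) / (right - left);   // normalize to 0..1
        float v = (top - handY) / (top - bottom);    // screen Y grows downward
        u = std::max(0.0f, std::min(1.0f, u));       // clamp to the zone edges
        v = std::max(0.0f, std::min(1.0f, v));
        pixelX = static_cast<int>(u * (screenW - 1));
        pixelY = static_cast<int>(v * (screenH - 1));
    }
};

int main() {
    Phiz phiz;
    int x, y;
    phiz.toScreen(0.10f, 0.05f, 1920, 1080, x, y);   // hand slightly right and up
    std::printf("cursor at (%d, %d)\n", x, y);
}
```

Clamping at the edges matters: it keeps the cursor on screen even when the hand drifts out of the comfortable zone, which is itself a form of feedback about where the zone ends.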

This sample is a bit hidden in the toolkit browser, but everyone should check it out. It embodies best practices that we described in the HIG and can be repurposed by developers easily and quickly.

Bob: First we had the HIG, now we have this first sample. And it’s only going to get better. There will be more to come in the future.

Why upgrade?

Bob: There’s no downside to upgrading, so everyone should do it today! There are no breaking changes; it’s fully compatible with previous releases of the SDK, it gives you broader operating system support, there are a lot of new features, and it supports distribution in more countries with localized setup and license agreements. And, of course, China is now part of the equation.

Peter: There are four basic reasons to use the Kinect for Windows SDK and to upgrade to the most recent version:

  • More sensor data are exposed in this release.
  • It’s easier to use than ever (more samples, more documentation).
  • There’s more operating system and tool support (including Windows 8, virtual machine support, Microsoft Visual Studio 2012, and Microsoft .NET Framework 4.5).
  • It supports distribution in more geographical locations. 

What are your top three favorite features in the latest release of the SDK and why?

Peter: If I must limit myself to three, then I’d say the HIG sample (Basic Interactions) is probably my favorite new thing. Secondly, there’s so much more documentation for developers. And last but not least…infrared! I’ve been dying for infrared since the beginning. What do you expect? I’m a developer. Now I can see in the dark!

Bob: My three would be extended-range depth data, color camera settings, and Windows 8 support. Why wouldn’t you want to have the ability to develop for Windows 8? And by giving access to the depth data, we’re giving developers the ability to see beyond 4 meters. Sure, the data out at that range isn’t always pretty, but we’ve taken the guardrails off—we’re letting you go off-roading. Go for it!

New extended-range depth data now provides details beyond 4 meters. These images show the difference between depth data gathered from previous SDKs (left) versus the updated SDK (right).
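As a sketch of what this unlocks, here is how the extended (unclamped) depth values can be read with the native C++ API and scanned for returns beyond 4 meters. The entry point named below is from the 1.6 headers as we recall them, so verify against NuiApi.h:

```cpp
// Sketch: counting extended-range depth pixels beyond 4 m with the
// native Kinect for Windows SDK 1.6 C++ API. Error handling trimmed.
#include <Windows.h>
#include <NuiApi.h>
#include <cstdio>

int main() {
    if (FAILED(NuiInitialize(NUI_INITIALIZE_FLAG_USES_DEPTH))) return 1;

    HANDLE frameReady = CreateEvent(NULL, TRUE, FALSE, NULL);
    HANDLE depthStream = NULL;
    NuiImageStreamOpen(NUI_IMAGE_TYPE_DEPTH, NUI_IMAGE_RESOLUTION_640x480,
                       0, 2, frameReady, &depthStream);

    WaitForSingleObject(frameReady, INFINITE);
    const NUI_IMAGE_FRAME* frame = NULL;
    NuiImageStreamGetNextFrame(depthStream, 0, &frame);

    // Extended depth exposes full millimeter values, including those beyond
    // 4 m that the classic packed-depth representation used to clamp.
    BOOL nearMode = FALSE;
    INuiFrameTexture* tex = NULL;
    NuiImageFrameGetDepthImagePixelFrameTexture(
        depthStream, const_cast<NUI_IMAGE_FRAME*>(frame), &nearMode, &tex);

    NUI_LOCKED_RECT locked;
    tex->LockRect(0, &locked, NULL, 0);
    const NUI_DEPTH_IMAGE_PIXEL* px =
        reinterpret_cast<const NUI_DEPTH_IMAGE_PIXEL*>(locked.pBits);
    int beyond4m = 0;
    for (int i = 0; i < 640 * 480; ++i)
        if (px[i].depth > 4000) ++beyond4m;            // depth is in millimeters
    std::printf("%d pixels beyond 4 m (noisy, but usable)\n", beyond4m);

    tex->UnlockRect(0);
    tex->Release();
    NuiImageStreamReleaseFrame(depthStream, frame);
    NuiShutdown();
    return 0;
}
```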

Peter: Oh yeah, and regarding camera settings, in case it isn’t obvious: this is for those people who want to tune their apps specifically to known environments.

What’s it like working together?

Peter: Bob is one of the most technically capable program managers (PMs) I have had the privilege of working with.

Bob: We have worked together for so long—over a decade and in three different companies—so there is a natural trust in each other and our abilities. When you are lucky to have that, you don’t have to spend energy and time figuring out how to work together. Instead, you can focus on getting things done. This leaves us more time to really think about the customer rather than the division of labor.

Peter: My team is organized by area of technical affinity. I have developers focused on:

  • SDK runtime
  • Computer vision/machine learning
  • Drivers and low-level subsystems
  • Audio
  • Samples and tools

Bob: We have a unique approach to the way we organize our teams: I take a very scenario-driven approach, while Peter takes a technically focused approach. My team is organized into PMs who look holistically across what end users need, versus what commercial customers need, versus what developers need.

Peter: We organize this way intentionally and we believe it’s a best practice that allows us to iterate quickly and successfully!

What was the process you and your teams went through to determine what this SDK release would include, and who is this SDK for?

Bob: This SDK is for every Kinect for Windows developer and anyone who wants to develop with voice and gesture. Seriously, if you’re already using a previous version, there is really no reason not to upgrade. You might have noticed that we gave developers a first version of the SDK in February, then a significant update in May, and now this release. We have designed Kinect for Windows around rapid updates to the SDK; as we roll out new functionality, we test our backwards compatibility very thoroughly, and we ensure no breaking changes.

We are wholeheartedly dedicated to Kinect for Windows. And we’re invested in continuing to release updated iterations of the SDK rapidly for our business and developer customers. I hope the community recognizes that we’re making the SDK easier and easier to use over time and are really listening to their feedback.

Peter Zatloukal, Engineering Manager
Bob Heddle, Group Program Manager
Kinect for Windows

Related Links