Presented at
31C3 (2014),
Dec. 29, 2014, 9:15 p.m.
(30 minutes).
The talk gives an overview of the emerging field of smart glasses and how they can be used to augment our minds (i.e. how to improve our brains with technology). The talk focuses mostly on how to quantify cognitive tasks in real-world environments. I also present first application scenarios for using smart eyewear (e.g. Google Glass or J!NS MEME) for short-term memory augmentation and cognitive activity recognition.
Over the last centuries, major scientific breakthroughs aimed at overcoming our physical limitations (faster transportation, taller buildings,
longer, more comfortable lives).
Yet, I believe the coming big scientific
breakthroughs will focus on
overcoming our cognitive limitations.
Smart glasses can play a vital role in
1. understanding our cognitive actions and limitations
by quantifying them
2. helping us design interventions to improve our mind.
The talk will focus mostly on the first point:
what kinds of cognitive tasks we can already track
with the smart glasses available on the
market, and what will happen in the near future.
I will discuss application examples for
Google Glass and J!NS MEME. J!NS MEME is the first consumer-level device that measures eye movements using electrodes, a technique called Electrooculography (EOG). The MEME glasses are not a general computing platform; they can only stream sensor data to a computer (e.g. smartphone, laptop, desktop) over Bluetooth LE. The sensor data includes vertical and horizontal EOG channels plus accelerometer and gyroscope readings. The runtime of the device is 8 hours, enabling long-term recording and, more importantly, long-term real-time streaming of eye and head movement. The glasses are unobtrusive and look mostly like normal glasses.
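To give an idea of what such a stream looks like on the receiving end, here is a minimal decoding sketch. The packet layout is purely hypothetical (the real J!NS MEME protocol is not documented in this talk): two EOG channels followed by 3-axis accelerometer and gyroscope values, each as a signed 16-bit integer.

```python
import struct

# Hypothetical packet layout (NOT the actual J!NS MEME protocol):
# vertical EOG, horizontal EOG, accel x/y/z, gyro x/y/z,
# each a little-endian signed 16-bit integer.
PACKET_FORMAT = "<8h"
PACKET_SIZE = struct.calcsize(PACKET_FORMAT)  # 16 bytes

def decode_packet(data: bytes) -> dict:
    """Decode one raw sensor packet into named channels."""
    v_eog, h_eog, ax, ay, az, gx, gy, gz = struct.unpack(PACKET_FORMAT, data)
    return {
        "eog_vertical": v_eog,
        "eog_horizontal": h_eog,
        "accel": (ax, ay, az),
        "gyro": (gx, gy, gz),
    }

# Example: decode one synthetic packet as it might arrive over Bluetooth LE.
sample = struct.pack("<8h", 120, -45, 0, 0, 1000, 3, -2, 1)
decoded = decode_packet(sample)
print(decoded["eog_vertical"])  # 120
```

In a real application the bytes would arrive via a Bluetooth LE notification callback rather than a hand-packed buffer.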
For Google Glass I present an open sensor-logging platform (including the infrared sensor to count eye blinks) and a fast interface for lifelogging.
We will discuss which eye movements correlate with
brain functions and how this can be used
to estimate the cognitive task a user is performing,
from fatigue detection and reading segmentation
to cognitive workload estimation, as well as recent advances in tracking attention and concentration. Challenges discussed in the talk include how to obtain ground truth and how to evaluate performance in general.
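As one concrete instance of this link: eye blinks appear as sharp peaks in the vertical EOG channel, and blink rate is a common proxy for fatigue. A minimal rising-edge thresholding sketch (the threshold and the sample trace below are made up for illustration):

```python
def count_blinks(v_eog, threshold=200):
    """Count blinks as rising-edge threshold crossings in vertical EOG.

    A blink is counted each time the signal rises above the threshold
    after having been at or below it.
    """
    blinks = 0
    above = False
    for sample in v_eog:
        if sample > threshold and not above:
            blinks += 1
            above = True
        elif sample <= threshold:
            above = False
    return blinks

# Synthetic vertical EOG trace: baseline noise with two blink peaks.
signal = [10, 5, 12, 350, 420, 90, 8, 15, 380, 60, 11]
print(count_blinks(signal))  # 2
```

Real EOG pipelines typically add baseline-drift removal and filtering first; this sketch only shows the basic idea of extracting a fatigue-related feature from the raw channel.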
Presenters:
- Kai Kunze
I strive to make technology more accessible and our lives more predictable.
-General love for science, hacking and playing with tech-
I work as an Assistant Professor in Computer Science and Intelligent Systems at Osaka Prefecture University, Japan.
Kai has quite diverse interests. Currently, he invests a lot of time in:
exploring reading activities
human computer interaction
context/activity recognition
cyber-physical systems
embedded systems
social computing
distributed systems
pervasive computing.