Presented at
30C3 (2013),
Dec. 27, 2013, 8:30 p.m.
(60 minutes).
The talk gives an overview of our work on quantifying knowledge acquisition tasks in real-life environments, focusing on reading. We combine several pervasive sensing approaches (computer vision, motion-based activity recognition, etc.) to tackle the problem of recognizing and classifying knowledge acquisition tasks, with a special focus on reading. We discuss which sensing modalities can be used to recognize digital and offline reading, and how to combine them dynamically.
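As a rough illustration of what "combining modalities dynamically" can mean, the sketch below fuses per-modality reading estimates by confidence-weighted averaging, so sensors that are momentarily unavailable simply drop out. This is a hypothetical Python example; the modality names, fields, and weighting scheme are assumptions made for illustration, not the method presented in the talk.

```python
from dataclasses import dataclass

@dataclass
class ModalityEstimate:
    name: str          # e.g. "eye_tracker", "head_motion", "egocentric_camera"
    p_reading: float   # probability that the user is currently reading
    confidence: float  # 0..1, how much this modality trusts its own estimate
    available: bool    # is the sensor currently delivering usable data?

def fuse_reading_probability(estimates):
    """Confidence-weighted fusion of per-modality reading estimates.

    Modalities that are unavailable (camera occluded, eye tracker lost
    the pupil, ...) drop out, so the combination adapts dynamically to
    whatever sensors currently work.
    """
    usable = [e for e in estimates if e.available and e.confidence > 0]
    if not usable:
        return None  # no usable evidence in this time window
    total = sum(e.confidence for e in usable)
    return sum(e.p_reading * e.confidence for e in usable) / total

# Example: the eye tracker is confident, the motion sensor less so,
# and the camera is currently occluded.
print(fuse_reading_probability([
    ModalityEstimate("eye_tracker", 0.9, 0.8, True),
    ModalityEstimate("head_motion", 0.6, 0.3, True),
    ModalityEstimate("egocentric_camera", 0.0, 0.0, False),
]))  # -> ~0.82
```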
People increasingly track their physical fitness, from counting steps and recording sports exercises to monitoring their food intake (e.g. Fitbit, Runkeeper, Lose It). Being more aware of their routines lets them improve their physical lives, fostering better eating and exercise habits, decreasing the risk of obesity-related diseases, and increasing their quality of life. Physical activity recognition is becoming mainstream.
Traditionally, research in activity recognition has focused on identifying physical tasks performed by the user, using elaborate, dedicated sensor setups in the lab. Yet, in recent years, physical activity recognition has become more mainstream. As industry begins to apply advances suggested by activity recognition research, we see more and more commercial products that help people track their physical fitness, from simple step counting (e.g., Fitbit One, Misfit Shine, Nike Fuelband, Withings Pulse) and sports exercise tracking (e.g., Runkeeper, Strava) to sleep monitoring.
We also see the first smartphone applications giving users an overview of their activities, for example "Human" and "Moves", to name just two. Their tracking abilities are still limited by the battery power of today's smartphones. Yet, with the just-announced M7 chip in the new iPhone 5s (which makes it easy to aggregate and interpret sensor data in a power-efficient manner), we can expect that physical activity tracking will sooner or later be integrated into our smartphones and other everyday appliances.
While physical activity recognition has been explored thoroughly, detecting cognitive activities is an area with many open challenges. This exciting new research field, the Cognitive Quantified Self, opens up new opportunities at the intersection of wearable computing, machine learning, psychology, and cognitive science. In this talk I focus on tracking reading (the cognitive process of decoding letters, words, and sentences) in a mobile setting using optical eye tracking and, occasionally, first-person vision (a camera worn on the user's head).
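To make the eye-tracking side more concrete, here is a minimal, hypothetical sketch of how reading can be spotted in a gaze trace: reading shows up as many small left-to-right saccades along a line of text, punctuated by large right-to-left return sweeps to the next line. The thresholds and the function name are made-up placeholders, not the actual classifier from our work.

```python
import numpy as np

def classify_reading(gaze_xy, min_forward_ratio=0.6):
    """Toy reading detector for one window of gaze samples.

    gaze_xy: (N, 2) array of (x, y) gaze points in normalized page
    coordinates (0..1), x growing left to right, y growing downwards.
    Returns True if the window looks like left-to-right reading.
    """
    dx = np.diff(gaze_xy[:, 0])
    dy = np.diff(gaze_xy[:, 1])

    # Forward saccades: small rightward jumps along one line of text.
    forward = np.sum((dx > 0.005) & (dx < 0.1) & (np.abs(dy) < 0.02))
    # Return sweeps: large leftward jump to the start of the next line,
    # combined with a small downward shift.
    sweeps = np.sum((dx < -0.2) & (dy > 0))
    moves = np.sum(np.abs(dx) > 0.005) + 1e-9

    return bool(forward / moves > min_forward_ratio and sweeps >= 1)

# Example: synthetic gaze trace scanning two lines of text left to right.
line = np.linspace(0.1, 0.9, 20)
xs = np.concatenate([line, line])
ys = np.concatenate([np.full(20, 0.30), np.full(20, 0.35)])
print(classify_reading(np.column_stack([xs, ys])))  # -> True
```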
We want to establish long-term tracking of a wide range of mental processes, suggesting strategies to optimize mental fitness and cognitive well-being.
Imagine educators getting real-time feedback about the attention level and learning progress of their students, giving them a good grasp of which concepts are already understood and which are potentially difficult. Learning material can be redesigned and tailored to the needs of the individual student, given their reading/learning history, their preferences, and their lifestyle.
Content creators can use these quantified-mind logs as a basis for improving their works. In which part of the movie are viewers most attentive? What feelings does a particular paragraph in a book convey? And if you are a researcher, wouldn't you like to know at which sentence a reader loses interest in your grant proposal?
And, given a large enough user group and good enough sensing, we can finally tackle lifestyle issues that are harder to analyze, e.g., how do sleep and eating habits influence our attention and learning?
Presenters:
-
Kai Kunze
I strive to make technology more accessible and our lives more predictable.
General love for science, hacking, and playing with tech.
I work as an Assistant Professor in Computer Science and Intelligent Systems at Osaka Prefecture University, Japan.
Kai has quite diverse interests. Currently, he invests a lot of time in:
- exploring reading activities
- context/activity recognition
- human computer interaction
- cyber-physical systems
- embedded systems
- social computing
- distributed systems
- pervasive computing