From Smart to Cognitive Phones

As smartphones get smarter by utilizing new intelligence in the phone and the cloud, they’ll start to understand our life patterns, reason about our health and well-being, help us navigate through our day, and intervene on our behalf. Here, we present various smartphone sensing systems that we’ve built over the years, arguing that, eventually, smartphones will become cognitive. First, however, we look back at how sensing capabilities in phones evolved.

TOWARD COGNITIVE PHONES 

Figure. The WalkSafe app (a) offers real-time detection of the front and back views of cars, noting when a car is approaching or moving away from a user on the phone. (b) Each video frame is preprocessed to compensate for the phone tilt.
By pushing intelligence to the phone in the form of classification models, we can infer human behavior and context. We can exploit big data to build more accurate and robust classification systems. Because people carry their phones as they navigate through the day, phones are well situated to go beyond simple inference of classes by building up knowledge of the user’s life patterns and choices. What if a phone could not only build lifelogs but also predict outcomes and assist the user? We argue the next step in the evolution of the phone is the cognitive phone.
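As a minimal sketch of this idea, the following Python fragment maps a window of accelerometer readings to an activity label using a model trained offline. The features, labels, and toy training data are purely illustrative assumptions, not any particular deployed system.

```python
# Illustrative sketch: push a classification model onto the phone and
# map a window of raw accelerometer samples to an activity label.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def features(window: np.ndarray) -> np.ndarray:
    """Summarize a (n_samples, 3) accelerometer window as simple statistics."""
    mag = np.linalg.norm(window, axis=1)  # per-sample acceleration magnitude
    return np.array([mag.mean(), mag.std(), mag.max() - mag.min()])

# Toy training data standing in for a model trained offline on "big data"
rng = np.random.default_rng(0)
still = [features(rng.normal(0.0, 0.05, (50, 3)) + [0, 0, 1]) for _ in range(100)]
walking = [features(rng.normal(0.0, 0.60, (50, 3)) + [0, 0, 1]) for _ in range(100)]
X = np.vstack([still, walking])
y = ["still"] * 100 + ["walking"] * 100

model = RandomForestClassifier(n_estimators=20, random_state=0).fit(X, y)

# On the phone, each new sensor window is reduced to features and classified
new_window = rng.normal(0.0, 0.6, (50, 3)) + [0, 0, 1]
print(model.predict([features(new_window)])[0])  # most likely "walking"
```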


It’s easy to imagine that the next generation of mobile health applications will not only track the user’s physical, cognitive, and mental health but also use data analytics and prediction to model trends in the data. Application-specific evidence, such as progressive social isolation, inactivity, and sporadic sleep patterns, could thus help predict the manic and depressive phases of someone suffering from a serious mental illness, such as bipolar disorder. If the phone could accurately predict this change in health, could it also intervene to help the patient?
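As a rough illustration of the kind of trend analysis involved, the sketch below flags a sustained deviation of a daily behavioral feature (here, sleep duration) from a personal baseline. The window sizes and threshold are illustrative assumptions, not clinically validated values.

```python
# Hypothetical sketch: flag sustained deviations in a daily behavioral
# feature that might precede a mood episode. Thresholds and window
# sizes are invented for illustration.
import numpy as np

def sustained_deviation(daily: np.ndarray, baseline_days: int = 28,
                        recent_days: int = 7, z_thresh: float = 2.0) -> bool:
    """True if the recent average deviates strongly from the personal baseline."""
    baseline = daily[-(baseline_days + recent_days):-recent_days]
    recent = daily[-recent_days:]
    z = (recent.mean() - baseline.mean()) / (baseline.std() + 1e-9)
    return abs(z) > z_thresh

# A month of typical sleep followed by a week of very short sleep
sleep_hours = np.array([7.5] * 28 + [4.0] * 7)
if sustained_deviation(sleep_hours):
    print("Sleep pattern has shifted markedly; consider an intervention.")
```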

Another example relates to using sensor fusion and prediction on the phone. Changes in speech production are one of many physiological changes that occur during stressful situations. We recently developed the StressSense app on a quad-core Android phone,7 which unobtrusively recognizes stress from the human voice using the smartphone’s microphone. Microphones embedded in mobile phones provide the opportunity to continuously and noninvasively monitor stress levels in real-life situations.
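The sketch below conveys the flavor of voice-based stress sensing, though it is not the actual StressSense pipeline: it estimates pitch per audio frame via autocorrelation (speech under stress tends to show raised, more variable pitch) and applies an invented decision rule. StressSense itself uses a trained classifier over richer acoustic features.

```python
# A minimal sketch, not the StressSense pipeline: estimate pitch per
# frame from the autocorrelation peak, then apply a toy decision rule.
import numpy as np

def estimate_pitch(frame: np.ndarray, sr: int, fmin: int = 75, fmax: int = 400) -> float:
    """Crude fundamental-frequency estimate from the autocorrelation peak."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)
    lag = lo + np.argmax(ac[lo:hi])
    return sr / lag

sr = 16000
t = np.arange(sr) / sr
voiced = np.sin(2 * np.pi * 220 * t)   # stand-in for one second of speech
frames = voiced.reshape(-1, 320)       # 20 ms frames
pitches = np.array([estimate_pitch(f, sr) for f in frames])

# Illustrative decision rule: elevated, variable pitch as a stress cue
stressed = pitches.mean() > 200 and pitches.std() > 15
print(f"mean pitch {pitches.mean():.0f} Hz -> {'stressed' if stressed else 'not stressed'}")
```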


Imagine a cognitive phone capable of fusing StressSense output (that is, a robust classification of stressors) with other phone data such that it could correlate and attribute stressors to people, meetings (from your phone’s calendar), your health (correlations with BeWell), events (deadlines), and places (your manager’s office). Consider, for example, a phone calendar that overlays a simple color code representing your stress levels, so you can see at a glance which events, people, and places in the past (and thus likely in the future) aren’t good for your mental health. Armed with this knowledge, the cognitive phone could help you avoid stressful situations by, for example, rearranging your calendar to avoid certain people, events, and locations. If your phone could understand your DNA, it might also offer suggestions to improve your overall well-being.
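One way such fusion might look in code, with entirely made-up events, stress scores, and color thresholds:

```python
# Hypothetical sketch: aggregate per-window stress classifications by
# calendar event and map each event to a color for the calendar overlay.
from dataclasses import dataclass

@dataclass
class Event:
    title: str
    stress_scores: list  # fraction of voice windows classified "stressed", per occurrence

def color(mean_stress: float) -> str:
    return "red" if mean_stress > 0.6 else "yellow" if mean_stress > 0.3 else "green"

calendar = [
    Event("Weekly review with manager", [0.8, 0.7, 0.9]),
    Event("Lunch with friends", [0.1, 0.2]),
]
for event in calendar:
    mean = sum(event.stress_scores) / len(event.stress_scores)
    print(f"{event.title}: {color(mean)}")  # -> red, green
```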

These motivational scenarios align with many of the open challenges in AI. An enduring difficulty AI researchers face is figuring out how to make systems more flexible, adaptable, and extensible. The development of cognitive phones will require tackling these challenges in the domain of human behavior as well as providing context recognition that works at the population level and throughout a user’s lifetime. For example, in a mobile phone-based sensing app, the human user is always present and hence potentially able to provide helpful input, such as labels for data. However, an intelligent system will use this human resource sparingly and only when the potential information to be gained outweighs the inconvenience of interrupting the user.
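The following sketch captures that trade-off under simple assumptions: the phone requests a label only when the model’s predictive entropy (a cheap proxy for expected information gain) exceeds a fixed interruption cost. Both the proxy and the threshold are illustrative choices, not a prescribed design.

```python
# Sketch of "use the human sparingly": query the user for a label only
# when model uncertainty is high enough to outweigh the interruption.
import numpy as np

def should_ask_user(class_probs: np.ndarray, interruption_cost: float = 0.8) -> bool:
    """Query for a label when predictive entropy (bits) exceeds the cost."""
    p = class_probs[class_probs > 0]
    entropy = -(p * np.log2(p)).sum()
    return entropy > interruption_cost

print(should_ask_user(np.array([0.95, 0.03, 0.02])))  # confident -> False
print(should_ask_user(np.array([0.40, 0.35, 0.25])))  # uncertain -> True
```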

Similarly, cognitive phones will seek to intelligently combine information from different sources, not by generic data pooling but by leveraging known relationships between human behavior at the group and individual levels. The phone would require a reasoning framework that considers multiple objectives and makes different types of decisions based on user needs, such as whether to intervene (in the case of a patient relapse), offer a suggestion (perhaps reorganizing the user’s calendar based on measured stressors), or take action (such as ordering and paying for a latte in advance).
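A toy sketch of such a framework, with invented utilities: each candidate action is scored by expected benefit minus user burden, and the phone does nothing unless the best action clears a margin.

```python
# Hypothetical multi-objective decision sketch; all numbers are invented.
def choose_action(candidates: dict, margin: float = 0.1) -> str:
    """candidates maps action name -> (expected_benefit, user_burden)."""
    scored = {a: benefit - burden for a, (benefit, burden) in candidates.items()}
    best = max(scored, key=scored.get)
    return best if scored[best] > margin else "do_nothing"

print(choose_action({
    "intervene":   (0.9, 0.5),  # e.g., alert a caregiver about a possible relapse
    "suggest":     (0.4, 0.1),  # e.g., propose rearranging tomorrow's calendar
    "take_action": (0.3, 0.3),  # e.g., pre-order and pay for a latte
}))  # -> "intervene"
```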

Each app discussed here pushes intelligence to the phone to infer different aspects of human behavior and context. The cellphone’s rapid evolution into the smartphone has been breathtaking; the next evolutionary step should realize the cognitive phone. 

ACKNOWLEDGMENTS

The following faculty, postdocs, and students (listed in alphabetical order) helped develop the smartphone sensing research presented here: Saeed Abdullah, Shahid Ali, Ethan Berke, Giuseppe Cardone, Gokul T. Chittaranjan, Antonio Corradi, Afsaneh Doryab, Shane Eisenman, Kristof Fodor, Denise Frauendorfer, Daniel Gatica-Perez, Shaohan Hu, Nic Lane, Mu Lin, Hong Lu, Emiliano Miluzzo, Matthew Mukerjee, Mirco Musolesi, James Oakley, Wei Pan, Michela Papandrea, Mashfiqui Rabbi, Rajeev D.S. Raizada, Andy M. Sarroff, Lorenzo Torresani, Tianyu Wang, Ye Xu, Xiaochao Yang, and Chuang-wen You.

REFERENCES 
  1. N.D. Lane et al., “A Survey of Mobile Phone Sensing,” IEEE Comm. Magazine, Sept. 2010, pp. 140–150. 
  2. M. Rabbi et al., “Passive and In-situ Assessment of Mental and Physical Well-Being Using Mobile Sensors,” Proc. 13th ACM Int’l Conf. Ubiquitous Computing (Ubicomp 11), ACM, 2011, pp. 385–394.