Until recently, mobile sensing research such as activity recognition, in which people's activities (e.g., walking, driving, sitting, talking) are classified and monitored, required specialized mobile devices (e.g., the Mobile Sensing Platform [MSP] [6]) to be fabricated [7]. Mobile sensing applications had to be manually downloaded, installed, and hand-tuned for each device. User studies evaluating new mobile sensing applications and algorithms were small-scale because of the expense and complexity of running experiments at scale. As a result, this innovative research gained little momentum outside a small group of dedicated researchers.
Although the potential of mobile phones as a platform for sensing research has been discussed for a number of years in both the industrial [8] and research [9, 10] communities, the field saw little advancement until recently.
All that is changing because of a number of important technological advances. First, the availability of cheap embedded sensors, initially included in phones to drive the user experience (e.g., the accelerometer used to change the display orientation), is changing the landscape of possible applications. Phones can now be programmed to support new, disruptive sensing applications such as sharing a user's real-time activity with friends on social networks such as Facebook, keeping track of a person's carbon footprint, or monitoring a user's well-being. Second, smartphones are open and programmable.
In addition to sensing, phones come with computing and communication resources that offer a low barrier to entry for third-party programmers (e.g., undergraduates with little phone programming experience are developing and shipping applications). Third, and importantly, each phone vendor now offers an app store that lets developers deliver new applications to large populations of users across the globe. This is transforming the deployment of new applications and allowing the collection and analysis of data at a scale far beyond what was previously possible.
Fourth, the mobile computing cloud enables developers to offload mobile services to back-end servers, providing unprecedented scale and additional resources for computing over large collections of sensor data and for supporting advanced features such as persuasive user feedback based on the analysis of big sensor data.
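To make the activity-recognition idea mentioned at the start of this section concrete: systems of this kind typically reduce a raw accelerometer stream to simple statistical features computed over short windows, then classify each window. The sketch below is purely illustrative; the 0.5 threshold, the two-class labels, and the synthetic samples are assumptions for the example, not values from any published system.

```python
import math
import statistics

def magnitude(sample):
    """Euclidean norm of a 3-axis accelerometer sample (x, y, z), in g."""
    x, y, z = sample
    return math.sqrt(x * x + y * y + z * z)

def classify_window(samples, threshold=0.5):
    """Label a window of samples as 'walking' or 'sitting' based on the
    standard deviation of the acceleration magnitude. The threshold is
    illustrative only."""
    mags = [magnitude(s) for s in samples]
    return "walking" if statistics.stdev(mags) > threshold else "sitting"

# Synthetic data: a phone at rest reads roughly 1 g on one axis;
# walking adds a strong oscillation to the signal.
still = [(0.0, 0.0, 1.0)] * 50
walking = [(0.0, 0.0, 1.0 + (1.5 if i % 2 else -1.5)) for i in range(50)]

print(classify_window(still))    # sitting
print(classify_window(walking))  # walking
```

Real deployments use richer features (e.g., frequency-domain statistics) and trained classifiers rather than a fixed threshold, but the pipeline shape, windowing, feature extraction, classification, is the same.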