Contextual computing is a technology closely related to wearables, and one that will develop in parallel with them.
By Andrew Dent
The Cyber Sense
Recent advances in contextual computing can be traced to the development of recommendation engines within online shopping sites such as Amazon. These engines suggest purchases based on information including past purchases, user profile data, and recent searches. The demand for recommendation engines has driven innovation in the big-data space, particularly in the area of predictive analytics.
With the advent of the SmartPhone, our personal computing world has gone mobile. This has created the opportunity to feed sensor data from the mobile phone into these recommendation engines. As the form factor of the SmartPhone starts to integrate more tightly with our bodies (i.e. becomes wearable), the on-board sensors provided by these devices will be increasingly linked to smarter cloud-based big-data back ends. These systems know our likes and dislikes. Combining this sensor data with our likes and dislikes creates the ability to provide feedback to the user, which can then be used as input into our reasoning process. In essence, an additional sense – “The Cyber Sense”.
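To make the idea concrete, here is a minimal sketch of combining live sensor data with stored likes and dislikes to rank suggestions. All names and the scoring rule are illustrative assumptions, not a real recommendation API.

```python
# Hypothetical sketch: merge a sensor-derived context with stored user
# preferences to rank candidate recommendations.

def score_recommendations(sensor_context, preferences, candidates):
    """Rank candidate items by preference weight plus context fit."""
    scored = []
    for item in candidates:
        score = 0.0
        # Boost categories the user has historically liked.
        score += preferences.get(item["category"], 0.0)
        # Boost items that suit the current context (e.g. hot weather).
        if sensor_context.get("temperature_c", 20) > 28 and item.get("cold"):
            score += 1.0
        scored.append((score, item["name"]))
    return [name for score, name in sorted(scored, reverse=True)]

context = {"temperature_c": 32}          # from the phone's sensors
prefs = {"drinks": 0.8, "snacks": 0.3}   # from the big-data back end
items = [
    {"name": "iced tea", "category": "drinks", "cold": True},
    {"name": "crisps", "category": "snacks"},
]
print(score_recommendations(context, prefs, items))  # ['iced tea', 'crisps']
```

A real engine would learn these weights from historical data rather than hard-code them, but the flow — sensors in, preferences in, ranked feedback out — is the same.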
What makes context?
A variety of information can be used to create context. This information can be grouped into the categories shown in Table 1.
Table 1: Contextual information

|Category|Examples|
|---|---|
|Spatial information|Location, orientation, speed, altitude|
|Environmental|Light, temperature, humidity|
|Social|Who is with you, who is nearby|
|Compute resources|Accessible devices, networks, battery, display|
|Bio stats|Heart rate, blood temperature, brain activity|
|Activity monitoring|Walking, running, sitting, driving|
|Identity|Logon credentials, user preferences|
|Schedule|Calendar, email, IM, Skype activity|
|Mood|Happy, sad, frightened|
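One way an application might gather these categories into a single "context snapshot" is sketched below. The field names and defaults are illustrative assumptions, not a standard schema.

```python
# Illustrative model of a context snapshot covering several of the
# categories in Table 1. Not a standard or real-world schema.
from dataclasses import dataclass, field

@dataclass
class ContextSnapshot:
    # Spatial information
    location: tuple = (0.0, 0.0)        # (latitude, longitude)
    speed_kmh: float = 0.0
    # Environmental
    temperature_c: float = 20.0
    # Bio stats and activity monitoring
    heart_rate_bpm: int = 70
    activity: str = "sitting"
    # Social
    nearby_people: list = field(default_factory=list)

snapshot = ContextSnapshot(activity="walking", heart_rate_bpm=95)
print(snapshot.activity)  # walking
```

In practice each field would be populated by a different sensor or service, and the snapshot would be refreshed continuously as conditions change.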
Where to from here?
GPS location services connected to big-data back ends are now commonplace in advanced marketing engines. Indeed, Google Now can recommend restaurants and coffee shops based on the GPS co-ordinates coming from the phone. This means that marketers can generate targeted, location-based offers. The sophistication of this form of marketing will only increase over time as new sensors and sensor capabilities are added.
A great example here is dual cameras. A dual-camera system in a phone creates the capability of depth perception. This can provide much more contextual information about a user’s physical surroundings, greatly improving the usability of navigation services and existing augmented reality services.
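The depth perception a dual-camera system enables rests on a simple geometric relationship: an object's distance is the focal length times the baseline between the lenses, divided by the pixel disparity between the two images. A sketch, with illustrative numbers:

```python
# Depth from stereo disparity: Z = f * B / d, where f is the focal
# length in pixels, B the distance between the two lenses, and d the
# pixel offset of the same point between the two images.

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Estimate distance (in metres) to a point seen by both cameras."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# e.g. 1000-pixel focal length, 1 cm between lenses, 20-pixel disparity
print(depth_from_disparity(1000, 0.01, 20))  # 0.5 (metres)
```

A real pipeline must first match points between the two images to measure disparity, which is the computationally hard part; the distance calculation itself is this one line.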
Not only will new advanced sensors be added to SmartPhones, but SmartPhones will be able to better connect to local devices and web services that increase contextual information. When you get in your car, your phone will automatically connect to all the sensors in your car, adding its sensors to your contextual footprint. (You will become like Tony Stark stepping into your Iron Man suit). Advances in display technology, voice recognition, motion detection, and general device performance will shift SmartPhone control from touch to motion and voice. Contextual engines are already beginning to interpret voice streams to provide input to recommendation engines. (e.g. http://www.expectlabs.com/mindmeld/).
When systems and processes start to share this contextual information, things get really interesting. Consider a large retail shopping chain. A geo-fence in its parking lot could detect your presence and notify you of any irregularities in your ‘normal’ shopping patterns, such as out-of-stock items. This data can then in turn be used to drive demand patterns and associated supply-chain optimization: it’s hot, he is here for a cold drink, order more beer; it’s about to storm, he needs batteries and a raincoat, redirect the customer to another store. Such applications have the potential to drive huge improvements in areas such as customer satisfaction and supply-chain optimization.
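The geo-fence in this scenario is just a distance test against a GPS fix. A minimal sketch using the haversine great-circle formula (the store coordinates and radius are illustrative):

```python
# Minimal geo-fence check: is the shopper's GPS fix within radius_m of
# the store's parking lot? Uses the haversine great-circle formula.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Distance in metres between two latitude/longitude points."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def inside_geofence(point, centre, radius_m):
    return haversine_m(*point, *centre) <= radius_m

store = (37.7749, -122.4194)  # illustrative parking-lot coordinates
print(inside_geofence((37.7750, -122.4195), store, 100))  # True
```

In production this check typically runs on the device (phone operating systems expose geo-fencing APIs that wake an app on entry), and only the entry event — not a continuous location stream — is sent to the retailer's back end.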
Sounds Cool. Let’s build it.
Ok, so contextual computing sounds like either Nirvana or a direct-marketing nightmare. Like it or not, it’s here, and growing every day. Clearly, more infrastructure is needed than just the development of improved sensor capabilities on the SmartPhone. Advances in identity, service integration, preference detection, pattern matching, and network optimization will be required on the client side. Although much processing will occur on the SmartPhone, there will be a need to efficiently pass data to back-end systems for additional intelligence crunching. On the server side, advances in Complex Event Processing (CEP) systems will efficiently capture and analyze this data using improved machine learning and big-data predictive analysis technologies.
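To illustrate what a CEP system does at its core, here is a toy pattern detector over a stream of context events: fire when a hot-weather reading is followed by a store-entry event. Real CEP engines offer windowing, joins, and declarative pattern languages; this sketch, with its made-up event shapes and threshold, only shows the basic idea.

```python
# Toy complex-event-processing pattern: detect a temperature reading
# above 28 C followed later in the stream by a store-entry event.
# Event fields and the threshold are illustrative.

def detect_hot_entry(events):
    """Return True once the hot-then-entry pattern occurs."""
    hot = False
    for event in events:
        if event["type"] == "temperature" and event["value"] > 28:
            hot = True
        elif event["type"] == "store_entry" and hot:
            return True
    return False

stream = [
    {"type": "temperature", "value": 31},
    {"type": "store_entry"},
]
print(detect_hot_entry(stream))  # True
```

The interesting engineering in real systems is doing this over millions of concurrent streams with bounded memory and latency, which is where the big-data back ends come in.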
Ok, so what does this all mean? By this point it should be clear that new application architectures will be required to support contextual computing. Context builds on existing notions of application state. That’s interesting for Citrix, as traditional applications run remotely. Context information, like touch and mouse input, is remoted to the server via HDX. Citrix announced at Synergy the Mobile SDK for Windows Apps, which allows developers to enable virtualized applications to receive additional sensor contextual information from our Mobile Receivers. As we know from virtualization, some parts of an application are better run at the server, and other parts at the client. In order for contextual applications to work well, it’s important that the client device is able to make some of its own ‘decisions’, as in many cases it won’t make sense to push all the data to the server. Continued development of HDX technologies that are smart about what happens on the client and what happens on the server is critical to optimal application performance in the contextual world.
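The client-side 'decision' described here can be as simple as a routing rule: handle small, latency-sensitive context updates on the device and forward only events that need server-side history or heavy analysis. A hedged sketch — the event fields and thresholds are invented for illustration:

```python
# Illustrative client-side routing rule: process an event locally
# unless it needs server-side history or is too large to crunch on
# the device. Fields and thresholds are hypothetical.

def route_event(event, max_local_bytes=1024):
    """Return where a context event should be processed."""
    if event.get("needs_history"):       # requires big-data back end
        return "server"
    if event["size_bytes"] > max_local_bytes:
        return "server"
    return "client"

print(route_event({"size_bytes": 200}))                         # client
print(route_event({"size_bytes": 200, "needs_history": True}))  # server
```

A smarter version would also weigh battery level, network quality, and privacy — all of which are themselves part of the contextual information in Table 1.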
From an interface perspective, device proliferation will continue to challenge application delivery. Android is increasingly on a level footing with Apple devices. With Windows 8, a whole new raft of devices is emerging, and advances in large-screen multi-touch monitors mean that applications, and the associated contextual services, will need to be delivered to a large range of devices. This will drive up the cost of developing and deploying applications, and create an opportunity for Citrix to leverage its ‘run anywhere’ technologies to solve this problem for customers.
The contextual revolution is here and will play a significant role in the next generation of virtualization technologies.
The information provided in this article is subject to change without notice. The facts of this briefing are believed to be correct at the time of publication but cannot be guaranteed. Please note that the findings, conclusions, and recommendations that the Citrix CTO Office delivers are based on information gathered in good faith from both primary and secondary sources, whose accuracy we are not always in a position to guarantee.