Context awareness

Context awareness refers, in information and communication technologies, to the capability of taking into account the situation of entities,[1] typically users or devices, but not limited to those. Location is only the most obvious element of this situation. Narrowly defined for mobile devices, context awareness thus generalizes location awareness. Whereas location may determine how certain processes around a contributing device operate, context may be applied more flexibly to mobile users, especially users of smart phones. Context awareness originated as a term in ubiquitous computing, also called pervasive computing, which sought to link changes in the environment to computer systems that are otherwise static. The term has also been applied to business theory in relation to contextual application design and business process management issues.[2]

Qualities of context

Various categorizations of context have been proposed in the past. Dey and Abowd (1999)[3] distinguish between the context types location, identity, activity and time. Kaltz et al. (2005)[4] identified the categories user & role, process & task, location, time and device to cover a broad variety of mobile and web scenarios. They emphasize, however, that for these classical modalities any optimal categorization depends very much on the application domain and use case. Beyond these, more advanced modalities may apply when not only single entities are addressed but also clusters of entities that operate in a shared context, such as teams at work or a single user carrying multiple devices.

A classical understanding of context in business processes is derived from the definition of AAA applications,[5] with the following three categories:

  • Authentication, i.e. confirmation of a stated identity,
  • Authorisation, i.e. allowance of access to a location, function or data,
  • Accounting, i.e. the relation to an order context and to the accounts for applied labour, granted licenses, and delivered goods,

with location and time additionally entering all three of these terms as contextual qualifiers, as illustrated in the sketch below.
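
The following minimal sketch shows how the three AAA categories, qualified by location and time, might be checked in code. All names, rules and data are hypothetical and serve only as illustration, not as a real AAA implementation.

```python
from datetime import datetime, time

# Hypothetical stores; a plaintext password is used only to keep the sketch short.
USERS = {"alice": {"password": "s3cret", "role": "nurse"}}
PERMISSIONS = {"nurse": {"read_patient_record"}}
AUDIT_LOG = []

def authenticate(user, password):
    """Authentication: confirm the stated identity."""
    record = USERS.get(user)
    return record is not None and record["password"] == password

def authorize(user, action, location, now):
    """Authorisation: allow access depending on role, location and time."""
    role = USERS[user]["role"]
    on_ward = location == "ward_3"                        # contextual condition: location
    on_shift = time(7, 0) <= now.time() <= time(19, 0)    # contextual condition: time
    return action in PERMISSIONS.get(role, set()) and on_ward and on_shift

def account(user, action, location, now):
    """Accounting: record who did what, where and when."""
    AUDIT_LOG.append({"user": user, "action": action,
                      "location": location, "timestamp": now.isoformat()})

now = datetime(2024, 2, 6, 10, 30)
if authenticate("alice", "s3cret") and authorize("alice", "read_patient_record", "ward_3", now):
    account("alice", "read_patient_record", "ward_3", now)
```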

Computer science

In computer science, context awareness refers to the idea that computers can both sense and react to their environment. Devices may have information about the circumstances under which they operate and, based on rules or an intelligent stimulus, react accordingly. The term context awareness in ubiquitous computing was introduced by Schilit (1994).[6][7] Context-aware devices may also try to make assumptions about the user's current situation. Dey (2001) defines context as "any information that can be used to characterize the situation of an entity."[1]

While the computer science community initially perceived context as a matter of user location, as Dey discusses,[1] in the last few years this notion has been considered not simply as a state but as part of a process in which users are involved; thus, sophisticated and general context models have been proposed (see survey[8]) to support context-aware applications which use them to (a) adapt interfaces, (b) tailor the set of application-relevant data, (c) increase the precision of information retrieval, (d) discover services, (e) make the user interaction implicit, or (f) build smart environments. For example, a context-aware mobile phone may know that it is currently in the meeting room and that the user has sat down. The phone may conclude that the user is currently in a meeting and reject any unimportant calls.[9]
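
The meeting-room example can be sketched as a simple context rule. The sensed inputs and the notion of an "important" caller below are assumptions made purely for illustration.

```python
# Minimal sketch of the meeting-room example above.
def infer_activity(location, user_is_seated):
    """Combine simple context cues into an inferred activity."""
    if location == "meeting_room" and user_is_seated:
        return "in_meeting"
    return "unknown"

def handle_incoming_call(caller, activity, important_callers):
    """Adapt the phone's behaviour to the inferred context."""
    if activity == "in_meeting" and caller not in important_callers:
        return "reject"
    return "ring"

activity = infer_activity(location="meeting_room", user_is_seated=True)
print(handle_incoming_call("unknown_number", activity, important_callers={"boss"}))  # -> reject
```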

Context-aware systems are concerned with the acquisition of context (e.g. using sensors to perceive a situation), the abstraction and understanding of context (e.g. matching a perceived sensory stimulus to a context), and application behaviour based on the recognized context (e.g. triggering actions based on context).[10] As the user's activity and location are crucial for many applications, research on context awareness has focused most deeply on location awareness and activity recognition.
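
A minimal sketch of these three stages, with invented sensor readings and rules, might look as follows.

```python
# Schematic three-stage pipeline: acquisition, abstraction, behaviour.
def acquire():
    """Context acquisition: read raw values from (simulated) sensors."""
    return {"ambient_light_lux": 12, "accelerometer_movement": 0.02}

def abstract(raw):
    """Context abstraction: map raw readings to a symbolic situation."""
    if raw["ambient_light_lux"] < 20 and raw["accelerometer_movement"] < 0.05:
        return "device_idle_in_dark_room"
    return "device_in_use"

def act(situation):
    """Context-aware behaviour: trigger an action for the recognized context."""
    return "dim_screen_and_silence" if situation == "device_idle_in_dark_room" else "normal_mode"

print(act(abstract(acquire())))   # -> dim_screen_and_silence
```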

Context awareness is regarded as an enabling technology for ubiquitous computing systems. It is used to design innovative user interfaces and is often part of ubiquitous and wearable computing. Its influence is also beginning to be felt on the internet with the advent of hybrid search engines. Schmidt, Beigl and Gellersen[11] define human factors and the physical environment as two important aspects relating to computer science. More recently, much work has also been done to ease the distribution of context information; Bellavista, Corradi, Fanelli and Foschini[12] survey the many middleware solutions that have been designed to transparently implement context management and provisioning in mobile systems. Grifoni, D'Ulizia and Ferri[13] provided a review of several context-aware location-based service systems using big data, analysing the methodological and practical choices that their developers made during the main phases of the context-awareness process (i.e. context acquisition, context representation, and context reasoning and adaptation). Perera, Zaslavsky, Christen, and Georgakopoulos[14] have performed a comprehensive survey of context-aware computing from an Internet of Things perspective, reviewing over 50 leading projects in the field. Further, Perera has also surveyed a large number of industrial products in the existing IoT marketplace from a context-aware computing perspective.[15] Their survey is intended to serve as a guideline and a conceptual framework for context-aware product development and research in the IoT paradigm; the evaluation was done using the theoretical framework developed by Dey and Abowd (1999)[3] more than a decade earlier. The combination of the Internet and emerging technologies transforms everyday objects into smart objects that can understand and react to their contexts.[16]

Human factors related context is structured into three categories: information on the user (knowledge of habits, emotional state, biophysiological conditions), the user's social environment (co-location of others, social interaction, group dynamics), and the user's tasks (spontaneous activity, engaged tasks, general goals). Likewise, context related to physical environment is structured into three categories: location (absolute position, relative position, co-location), infrastructure (surrounding resources for computation, communication, task performance), and physical conditions (noise, light, pressure, air quality).[17][18]
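
As a rough illustration only, and not any standard representation, the two categorizations above could be mirrored in a data structure such as the following; the field names follow the text, while the concrete types are an assumption.

```python
from dataclasses import dataclass, field

@dataclass
class HumanFactorsContext:
    user: dict = field(default_factory=dict)                # habits, emotional state, biophysiological conditions
    social_environment: dict = field(default_factory=dict)  # co-location, social interaction, group dynamics
    tasks: dict = field(default_factory=dict)                # spontaneous activity, engaged tasks, general goals

@dataclass
class PhysicalEnvironmentContext:
    location: dict = field(default_factory=dict)        # absolute/relative position, co-location
    infrastructure: dict = field(default_factory=dict)   # surrounding computation/communication resources
    conditions: dict = field(default_factory=dict)       # noise, light, pressure, air quality

ctx = PhysicalEnvironmentContext(conditions={"noise_db": 42, "light_lux": 300})
print(ctx.conditions)
```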

Relational context: dynamic and non-user-centric definitions

Whereas early definitions of context tended to center on users, or on devices interfaced directly with users, the oft-cited definition from Dey[1] ("any information that can be used to characterize the situation of an entity") can be taken without this restriction. User-centric context, as may be used in the design of human-computer interfaces, may also imply an overly clear-cut, and partially arbitrary, separation between "content" (anything that is explicitly typed in by users or output to them) and context, which is implicit and used for adaptation purposes. A more dynamic and de-centered view, advocated by Dourish,[19] regards context as primarily relational. This was originally congruent with the move from desktop computing to ubiquitous computing, but it also fits a broader understanding of ambient intelligence in which the distinctions between context and content become relative and dynamic.[20] In this view, sources of information (such as IoT sensors) that are context for some uses and applications might be sources of primary content for others, and vice versa. What matters is the set of relationships that link them together and with their environment. Whereas early descriptions of single-user-centric context could fit classical entity-attribute-value models, more versatile graph-based information models, such as the one proposed with NGSI-LD, are better adapted to capture the more relational view of context that is relevant for the Internet of Things, Cyber-Physical Systems and Digital Twins. In this broader sense, context is not only represented as a set of attributes attached to an entity; it is also captured by a graph that enmeshes this entity with others. Context awareness is the capability to account for this cross-cutting information from different sources.
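
A simplified, NGSI-LD-style payload can illustrate this relational view: ordinary attributes describe the entity itself, while relationship attributes link it to other entities in the context graph. The identifiers and values below are invented for illustration and omit details such as the JSON-LD @context.

```python
import json

# Simplified sketch of a graph-based context entity in the NGSI-LD style:
# "Property" attributes describe the entity, "Relationship" attributes link it
# to other entities, so context is captured as a graph rather than a flat record.
parking_spot = {
    "id": "urn:ngsi-ld:ParkingSpot:spot-123",
    "type": "ParkingSpot",
    "status": {"type": "Property", "value": "free"},
    "isNextTo": {"type": "Relationship", "object": "urn:ngsi-ld:Building:cityHall"},
    "providedBy": {"type": "Relationship", "object": "urn:ngsi-ld:Sensor:spot-sensor-42"},
}
print(json.dumps(parking_spot, indent=2))
```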

Applications in situational or social awareness

Context awareness has been applied to the area of computer-supported cooperative work (CSCW) to help individuals work and collaborate more efficiently with each other. Since the early 1990s, researchers have developed a large number of software and hardware systems that can collect contextual information (e.g., location, video feeds, away-status messages) from users. This information is then openly shared with other users, thereby improving their situational awareness and allowing them to identify natural opportunities to interact with each other. In the early days of context-aware computing, many of the systems developed for this purpose were specifically designed to help businesses or geographically separated work teams collaborate on shared documents or work artifacts. More recently, however, a growing body of work demonstrates how this technique can also be applied to groups of friends or family members to help keep them apprised of each other's activities.

To date, systems that use context awareness to improve situational awareness can be characterised by:

  • the context(s) that they collect from each user, and
  • the method by which they convey this information to other users

The most common context to obtain and share for the purposes of improving situational awareness is the user's location. In an early prototype, the Active Badge system,[21] for example, each user wore a uniquely identifying badge that could be tracked via a series of overhead infrared sensors. As users walked throughout a building, their location was constantly monitored by a centralized server. Other users could then view this information (either in text form or on a map, as was done in later work[22]) to determine whether a user was in her office, allowing them to pick the best time to stop by for an unplanned conversation. Location was also shared in PeopleTones,[23] Serendipity,[24] and a group interaction support system[25] to help users determine when they are near friends, users with shared personal interests, and teammates, respectively. In comparison with Active Badge, which only displays location information, these systems are more proactive and alert users when they are in proximity of each other. This lets users know when a potential interaction opportunity is available, thereby increasing their chances of taking advantage of it.
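
A proximity alert in the spirit of these systems can be sketched with a plain distance check; the coordinates, buddy list and threshold below are placeholders, not details of any of the cited systems.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points in metres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

def check_proximity(my_pos, buddies, threshold_m=100):
    """Return the buddies that are close enough to trigger an alert."""
    return [name for name, pos in buddies.items()
            if haversine_m(*my_pos, *pos) <= threshold_m]

buddies = {"sam": (48.8584, 2.2945), "kim": (48.8606, 2.3376)}
print(check_proximity((48.8583, 2.2944), buddies))   # -> ['sam']
```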

Another popular context to share is a user's work activity, often by sharing video. In the Community Bar system,[26] researchers developed a desktop application that periodically took screenshots of the user's display. This information was then shared with the user's co-workers so that they could see which documents or artifacts their teammates were working on, providing a common frame of reference so that users could talk about these artifacts as if they were collocated. In Montage,[27] users are given the ability to remotely activate the webcam on another user's computer for a brief amount of time. This capability to "glance" at another user lets users see if they are busy or preoccupied, which in turn helps them better determine the most opportune time to initiate a conversation.

A third type of context shared to improve or enhance situational awareness is the user's audio. In the Thunderwire system,[28] researchers developed an audio-only media space that allowed friends to share raw audio from their mobile devices' microphones. This system, which in essence was a perpetual conference call, allowed users to listen to other users' audio in order to determine if and when they were participating in a conversation. The WatchMe[29] and ListenIn[30] systems also rely heavily on audio in order to determine if and when a user is potentially interruptible. Unlike Thunderwire, however, these systems rely on machine learning algorithms to analyze the user's audio and determine whether the user is talking. This allows the system to provide other users with the same context (i.e., whether or not the user is in a conversation) without having to share the actual audio, thereby making it more privacy-preserving.
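
The privacy-preserving idea of classifying audio on the device and sharing only the derived state can be sketched as follows; the simple energy-threshold heuristic stands in for a real speech classifier and is not how the cited systems actually work.

```python
def frame_energy(samples):
    """Average energy of one audio frame (a list of amplitude values)."""
    return sum(s * s for s in samples) / max(len(samples), 1)

def is_talking(audio_frames, energy_threshold=0.01, min_active_ratio=0.3):
    """Classify locally whether the user appears to be in a conversation."""
    active = sum(1 for f in audio_frames if frame_energy(f) > energy_threshold)
    return active / max(len(audio_frames), 1) >= min_active_ratio

def share_context(audio_frames):
    """Publish only the boolean context, never the raw audio."""
    return {"in_conversation": is_talking(audio_frames)}

quiet_frames = [[0.001] * 160 for _ in range(10)]
print(share_context(quiet_frames))   # -> {'in_conversation': False}
```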

A fourth type of context that is commonly shared is the user's overall activity, as in the Hubbub[31] and ConChat systems.

Applications in health care

Context-aware mobile agents are well suited hosts for implementing context-aware applications in health care. Modern integrated voice and data communication equips hospital staff with smart phones, not only to communicate vocally with each other but, preferably, to look up the next task to be executed and to capture the next report to be noted.

However, all attempts to support staff with such approaches are hampered, to the point of failing acceptance, by the need to look up patient identities, order lists and work schedules upon each new event. Hence a well suited solution has to do away with such manual interaction with a tiny screen and instead serve the user with (see the sketch after the following list):

  • automated identification of the actual patient and the local environment upon approach,
  • automated recording of the events of arriving at and leaving the actual patient,
  • automated presentation of the orders or services due at the current location, and
  • supported documentation to provide these qualities for the electronic health record (EHR).
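
A hypothetical sketch of this hands-free workflow is shown below: a proximity event identifies the patient, arrival and departure are recorded automatically, and the orders due at that location are presented. The tag identifiers, orders and event store are invented placeholders.

```python
from datetime import datetime

ORDERS = {"patient-17": ["blood pressure measurement", "administer medication A"]}
EHR_EVENTS = []

def on_proximity_event(staff_id, patient_tag, event):
    """Handle 'approach'/'leave' events from a bedside proximity sensor."""
    EHR_EVENTS.append({"staff": staff_id, "patient": patient_tag,
                       "event": event, "time": datetime.now().isoformat()})
    if event == "approach":
        return ORDERS.get(patient_tag, [])   # present the orders due here
    return []

print(on_proximity_event("nurse-alice", "patient-17", "approach"))
print(on_proximity_event("nurse-alice", "patient-17", "leave"))
```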

Applications in industrial production

Context-aware mobile agents are well suited hosts for implementing newer context-aware applications under the Industry 4.0 paradigm. Modern integrated (voice and) data communication equips workshop or production line staff with smart phones to exchange data with production control for feedback, where the data originate from detecting and identifying components and parts to be integrated into flexible production management for on-demand products.

However, all attempts to support staff with such approaches are hampered by fixed production schedules unless the information on customer demand and product configuration can be matched with the supply of parts. Hence a well suited solution has to bridge the gap between the production plan and the actual availability of relevant information and material on the production line by means of (see the sketch after the following list):

  • automated identification of the parts actually available, delivered from stock or out of buffer supplies,
  • automated presentation of the integration requirements for the on-demand configuration, and
  • automated detection and reporting of the actually mounted configuration.
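
The sketch below illustrates, with invented part names and orders, how detected parts might be matched against the configuration required for the current order and how the mounted configuration might be reported back to production control.

```python
def check_supply(required_parts, detected_parts):
    """Compare the order's required parts with what is actually available."""
    missing = [p for p in required_parts if p not in detected_parts]
    return {"ready": not missing, "missing": missing}

def report_mounted(order_id, mounted_parts):
    """Feedback message to production control (placeholder format)."""
    return {"order": order_id, "mounted": sorted(mounted_parts)}

required = ["chassis-A", "motor-12V", "panel-red"]
detected = {"chassis-A", "motor-12V", "panel-blue"}   # e.g. from tag reads at the line
print(check_supply(required, detected))               # -> {'ready': False, 'missing': ['panel-red']}
print(report_mounted("order-4711", {"chassis-A", "motor-12V"}))
```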

The key requirement is a solution free from manual interaction between the worker and the information handling; otherwise the error rate rises with the growing information requirements.

Additionally, none of the conventional RFID, WLAN or RTLS locating solutions that advertise highly precise locating delivers the required quality, as determining a location by the conventional approach of absolute coordinates fails either technically or economically. Other approaches based on fuzzy locating promise a better return on investment.
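
A zone-based ("fuzzy") locating approach can be sketched by assigning a tag to the zone whose readers hear it most strongly, rather than computing absolute coordinates; the reader names and signal strengths below are invented.

```python
def locate_zone(rssi_by_reader, readers_per_zone):
    """Return the zone whose readers report the strongest signal for the tag."""
    scores = {zone: max(rssi_by_reader.get(r, -100) for r in readers)
              for zone, readers in readers_per_zone.items()}
    return max(scores, key=scores.get)

zones = {"assembly_station_1": ["reader_a", "reader_b"],
         "buffer_storage": ["reader_c"]}
readings = {"reader_a": -55, "reader_b": -60, "reader_c": -80}   # RSSI in dBm
print(locate_zone(readings, zones))   # -> assembly_station_1
```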

Applications in pervasive games

A pervasive game leverages sensed human contexts to adapt game system behaviour. By blending real and virtual elements and enabling users to physically interact with their surroundings during play, players can become fully involved and attain a better gaming experience. For example, a pervasive game driven by an autonomous agent that uses the contexts of human activity and location in smart homes has been reported.[32]

Applications in mobile multimedia devices

Museums and archaeological sites sometimes provide multimedia mobile devices as an alternative to the conventional audio guide (see e.g. the Tate Modern in London).[33] A context-aware device will use the location, current user interactions and the graph of connected objects to dynamically tailor the information presented to the user.[34] In some cases this is combined with real-time navigation around the site to guide the user to artefacts or exhibits that are likely to be of interest, based on the user's previous interactions.[35]
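
As an illustration only, such tailoring can be sketched by combining the visitor's current room, the exhibits already seen, and a small graph of related objects; the exhibit names and relations below are invented.

```python
EXHIBITS_BY_ROOM = {"room_1": ["amphora", "mosaic"], "room_2": ["fresco"]}
RELATED = {"amphora": ["trade_routes_map"], "mosaic": ["fresco"]}

def recommend(current_room, visited, limit=3):
    """Suggest content from the current location, skipping what was already seen."""
    candidates = [e for e in EXHIBITS_BY_ROOM.get(current_room, []) if e not in visited]
    for seen in visited:                 # follow the graph of connected objects
        candidates += [r for r in RELATED.get(seen, []) if r not in visited]
    return candidates[:limit]

print(recommend("room_1", visited={"mosaic"}))   # -> ['amphora', 'fresco']
```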

References

  1. 1.0 1.1 1.2 1.3 Dey, Anind K. (2001). "Understanding and Using Context". Personal and Ubiquitous Computing 5 (1): 4–7. doi:10.1007/s007790170019. 
  2. Rosemann, M., & Recker, J. (2006). "Context-aware process design: Exploring the extrinsic drivers for process flexibility". Luxembourg: Namur University Press. pp. 149–158. http://eprints.qut.edu.au/archive/00004638/01/4638_1.pdf. 
  3. 3.0 3.1 Dey, Anind K.; Abowd, Gregory D. (1999). "Towards a Better Understanding of Context and Context-Awareness". 
  4. Kaltz, J.W.; Ziegler, J.; Lohmann, S. (2005). "Context-aware Web Engineering: Modeling and Applications". Revue d'Intelligence Artificielle 19 (3): 439–458. doi:10.3166/ria.19.439-458. http://interactivesystems.info/system/pdfs/112/original/14852.pdf. Retrieved 2013-01-15. 
  5. CISCO AAA Overview
  6. B. Schilit; N. Adams; R. Want. (1994). "Context-aware computing applications". pp. 89–101. 
  7. Schilit, B.N.; Theimer, M.M. (1994). "Disseminating Active Map Information to Mobile Hosts". IEEE Network 8 (5): 22–32. doi:10.1109/65.313011. 
  8. Cristiana Bolchini; Carlo A. Curino; Elisa Quintarelli; Fabio A. Schreiber; Letizia Tanca (2007). "A data-oriented survey of context models". SIGMOD Rec. 36 (4): 19–26. doi:10.1145/1361348.1361353. ISSN 0163-5808. http://carlo.curino.us/documents/curino-context2007-survey.pdf. 
  9. "Advanced Interaction in Context". 1999. pp. 89–101. http://www.teco.edu/~albrecht/publication/huc99/advanced_interaction_context.pdf. 
  10. Schmidt, Albrecht (2002). "Ubiquitous Computing - Computing in Context". http://www.comp.lancs.ac.uk/~albrecht/phd/. 
  11. Albrecht Schmidt; Michael Beigl; Hans-W. Gellersen (December 1999). "There is more to Context than Location". Computers & Graphics 23 (6): 893–902. doi:10.1016/s0097-8493(99)00120-x. http://www.teco.uni-karlsruhe.de/~albrecht/publication/draft_docs/context-is-more-than-location.pdf. 
  12. Paolo Bellavista; Antonio Corradi; Mario Fanelli; Luca Foschini (August 2012). "A Survey of Context Data Distribution for Mobile Ubiquitous Systems". ACM Computing Surveys 44 (4): 1–45. doi:10.1145/2333112.2333119. 
  13. Grifoni, Patrizia; D'Ulizia, Arianna; Ferri, Fernando (2018). Context-Awareness in Location Based Services in the Big Data Era. Lecture Notes on Data Engineering and Communications Technologies. Springer, Cham. pp. 85–127. doi:10.1007/978-3-319-67925-9_5. ISBN 9783319679242. 
  14. Perera, C.; Zaslavsky, A.; Christen, P.; Georgakopoulos, D. (2014). "Context Aware Computing for The Internet of Things: A Survey". IEEE Communications Surveys and Tutorials 16 (1): 414–454. doi:10.1109/SURV.2013.042313.00197. ISSN 1553-877X. 
  15. Perera, C.; Liu, C. H.; Jayawardena, S.; Chen, M. (2014). "A Survey on Internet of Things From Industrial Market Perspective". IEEE Access 2: 1660–1679. doi:10.1109/ACCESS.2015.2389854. ISSN 2169-3536. 
  16. Kortuem, Gerd; Kawsar, Fahim; Sundramoorthy, Vasughi; Fitton, Daniel (January 2010). "Smart Objects As Building Blocks for the Internet of Things". IEEE Internet Computing 14 (1): 44–51. doi:10.1109/MIC.2009.143. ISSN 1089-7801. http://usir.salford.ac.uk/2735/1/w1iot.pdf. 
  17. A Comprehensive Framework for Context-Aware Communication Systems. B. Chihani, E. Bertin, N. Crespi. 15th International Conference on Intelligence in Next Generation Networks (ICIN'11), Berlin, Germany, October 2011
  18. A Self-Organization Mechanism for a Cold Chain Monitoring System. C. Nicolas, M. Marot, M. Becker. 73rd Vehicular Technology Conference 2011 IEEE (VTC Spring), Yokohama, Japan May 2011
  19. Dourish, Paul. "What we talk about when we talk about context." Personal and ubiquitous computing 8.1 (2004): 19-30.
  20. Streitz, Norbert A.; Privat, Gilles (2009). "Ambient Intelligence". Universal Access Handbook. https://www.researchgate.net/publication/230704197. 
  21. Want, R.; Hopper, A.; Falcao, V.; Gibbons, J. (1992). "The Active Badge Location System". ACM Transactions on Information Systems 10 (1): 91–102. doi:10.1145/128756.128759. 
  22. McCarthy, J. F.; Meidel, E. S. (1999). "ActiveMap: A Visualization Tool for Location Awareness to Support Informal Interactions". Handheld and Ubiquitous Computing. Lecture Notes in Computer Science. 1707. pp. 158–170. doi:10.1007/3-540-48157-5_16. ISBN 978-3-540-66550-2. https://archive.org/details/handheldubiquito0000inte/page/158. 
  23. Li, K. A.; Sohn, T. Y.; Huang, S.; Griswold, W. G. (2008). "Peopletones: a system for the detection and notification of buddy proximity on mobile phones.". pp. 160–173. http://www.kevinli.net/peopletones.pdf. 
  24. Eagle, N.; Pentland, A. (2005). "Social Serendipity: Mobilizing Social Software". IEEE Pervasive Computing 4 (2): 28–34. doi:10.1109/MPRV.2005.37. 
  25. Ferscha, A.; Holzmann, C.; Oppl, S. (2004). "Context awareness for group interaction support". pp. 88–97. http://www.academia.edu/download/43935834/GroupInteractionSupport-ferscha2004.pdf. 
  26. Tee, K.; Greenberg, S.; Gutwin, C. (2006). "Providing Artifact Awareness to a Distributed Group Through Screen Sharing". pp. 99–108. https://prism.ucalgary.ca/bitstream/handle/1880/45901/2006-828-21.pdf?sequence=2&isAllowed=y. 
  27. Tang, J.; Rua, M. (1994). "Montage: Providing Teleproximity for Distributed Groups". pp. 37–43. 
  28. Ackerman, M.; Hindus, D.; Mainwaring, S.; Starr, B. (1997). "Hanging on the 'Wire: A Field Study of an Audio-Only Media Space". ACM Transactions on Computer-Human Interaction 4 (1): 39–66. doi:10.1145/244754.244756. 
  29. Marmasse, N.; Schmandt, C.; Spectre, D. (2004). "WatchMe: communication and awareness between members of a closely-knit group". pp. 214–231. https://www.media.mit.edu/speech/old/papers/2004/marmasse_UBI04_WatchMe.pdf. 
  30. Rosas, G. M. V. (2003). "ListenIN: Ambient Auditory Awareness at Remote Places". https://dspace.mit.edu/bitstream/handle/1721.1/62959/54698529-MIT.pdf?sequence=2. 
  31. Isaacs, E.; Walendowski, A.; Ranganthan, D. (2002). "Hubbub: A Sound-Enhanced Mobile Instant Messenger that Supports Awareness and Opportunistic Interactions". pp. 333–340. https://www.researchgate.net/publication/221518280. 
  32. B. Guo, R. Fujimura, D. Zhang, M. Imai. Design-in-Play: Improving the Variability of Indoor Pervasive Games. Multimedia Tools and Applications, 2011
  33. "Multimedia guides at Tate Modern". Archived from the original on 8 April 2012. https://web.archive.org/web/20120408200035/http://www.tate.org.uk/visit/tate-modern/things-to-do/multimedia-guides. 
  34. PAST Project - Context Aware Visitor Guiding
  35. AGAMEMNON - Real-time Visitor Guiding
