Spatial computing

The term spatial computing today refers to advanced human–computer interaction that is perceived by humans as taking place in their world, in and around their bodies, rather than perceptually inside the computer's world, behind the glass of a computer screen. In a sense, this concept inverts the long-standing practice of teaching people to interact with computers in digital environments, and instead teaches computers to understand and interact with people more naturally in the human world.

A close synonym is natural user interface (e.g., Kinect), which has the same overall aim. Spatial computing is also closely related to contextual computing (better understanding people's context and intent), affective computing (better understanding people's emotions and states of mind), and ubiquitous computing (making computers invisible and pervasive in everyday life).

Spatial computing typically uses sensors such as RGB cameras, depth cameras, 3D trackers, and inertial measurement units to sense and track nearby human bodies (including hands, arms, eyes, legs, and mouths) during ordinary interactions between people and computers in 3D space. It further uses computer vision (AI/ML) to attempt to understand real-world scenes such as rooms, streets, or stores in order to read labels, recognize objects, build 3D maps, and more. Quite often it also uses extended reality (XR) and mixed reality (MR) to superimpose virtual 3D graphics and virtual 3D audio onto the human visual and auditory systems, as a way of providing information more naturally and contextually than traditional 2D screens can.
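
To make the sensing-and-overlay pipeline above concrete, the following is a minimal Python sketch of the core coordinate bookkeeping, assuming a tracked headset pose and a camera-frame detection; the poses and detection values are invented stand-ins for real tracker and vision outputs, not any product's API. A point detected in the camera's frame is anchored once in world coordinates, then re-expressed in each new camera frame so a virtual label appears fixed in the room as the wearer moves.

    import numpy as np

    def pose_matrix(rotation, translation):
        """Build a 4x4 rigid-body transform from a 3x3 rotation and a translation."""
        T = np.eye(4)
        T[:3, :3] = rotation
        T[:3, 3] = translation
        return T

    # Hypothetical tracker output: the headset camera's pose in world
    # coordinates (identity rotation, 1.6 m above the floor origin).
    world_from_head = pose_matrix(np.eye(3), [0.0, 1.6, 0.0])

    # Hypothetical vision output: an object detected 2 m in front of the
    # camera, in the camera's own frame (homogeneous coordinates, -z forward).
    point_in_camera = np.array([0.0, 0.0, -2.0, 1.0])

    # Anchor the detection in world coordinates once...
    anchor_in_world = world_from_head @ point_in_camera

    # ...then, after the wearer steps 0.5 m to the right, re-express the same
    # world anchor in the new camera frame so the overlay stays put in the room.
    world_from_new_head = pose_matrix(np.eye(3), [0.5, 1.6, 0.0])
    anchor_in_new_camera = np.linalg.inv(world_from_new_head) @ anchor_in_world
    print(anchor_in_new_camera[:3])  # [-0.5, 0.0, -2.0]: the label held still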

Spatial computing does not strictly require visual output. For example, an advanced pair of headphones that used an inertial measurement unit and other contextual cues could qualify as spatial computing, if the device made contextual audio information available spatially, as if the sounds consistently existed in the space around the wearer. Smaller internet of things devices, like a robot floor cleaner, would be unlikely to be called spatial computing devices because they lack the more advanced human–computer interactions described above.
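
That audio-only case reduces to a small amount of geometry. The sketch below, in the same hedged spirit, assumes only a yaw angle from the headphones' inertial measurement unit and a single sound source anchored in the room; the equal-power pan is a crude stand-in for the head-related transfer functions that real spatial audio rendering uses.

    import numpy as np

    def source_in_head_frame(source_world, head_pos, head_yaw_rad):
        """Rotate a world-anchored source into head coordinates using IMU yaw."""
        c, s = np.cos(-head_yaw_rad), np.sin(-head_yaw_rad)
        # Inverse yaw rotation about the vertical (y) axis, -z forward.
        R_inv = np.array([[c, 0.0, s],
                          [0.0, 1.0, 0.0],
                          [-s, 0.0, c]])
        return R_inv @ (np.asarray(source_world) - np.asarray(head_pos))

    def pan_gains(direction):
        """Crude equal-power left/right gains from the source's lateral angle."""
        azimuth = np.arctan2(direction[0], -direction[2])  # 0 = straight ahead
        pan = np.clip(azimuth / (np.pi / 2), -1.0, 1.0)    # -1 = left, +1 = right
        return np.cos((pan + 1) * np.pi / 4), np.sin((pan + 1) * np.pi / 4)

    # A chime anchored 2 m ahead of the origin; the wearer turns 90 degrees to
    # the left, so the chime should now come entirely from the right ear.
    direction = source_in_head_frame([0.0, 0.0, -2.0], [0.0, 0.0, 0.0],
                                     head_yaw_rad=np.pi / 2)
    left, right = pan_gains(direction)
    print(direction, left, right)  # direction [2, 0, 0]; gains ~0.0, ~1.0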

Although spatial computing is most often associated with personal devices such as glasses and headphones, other human–computer interactions that leverage real-time spatial positioning for displays, such as projection mapping, can also be considered spatial computing.
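
In the projection-mapping case the underlying computation is a standard pinhole projection: given a tracked 3D point on a physical surface and the projector's calibrated intrinsics and pose, find which projector pixel must be lit to land on that point. All numbers in this sketch are invented for illustration.

    import numpy as np

    # Invented projector calibration: focal lengths and principal point, in pixels.
    K = np.array([[1400.0, 0.0, 960.0],
                  [0.0, 1400.0, 540.0],
                  [0.0, 0.0, 1.0]])

    # Invented projector pose: identity rotation, mounted 3 m from the surface.
    R = np.eye(3)
    t = np.array([0.0, 0.0, 3.0])

    def project(point_world):
        """Map a tracked 3D world point to projector pixel coordinates."""
        p_cam = R @ point_world + t  # world -> projector coordinates
        u, v, w = K @ p_cam          # pinhole projection
        return u / w, v / w          # normalize to pixel coordinates

    # As the tracked surface point moves, recomputing its pixel each frame
    # keeps the projected graphic "attached" to the physical object.
    print(project(np.array([0.2, -0.1, 0.0])))  # ~(1053.3, 493.3)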

History

The term apparently originated in the field of geographic information systems (GIS) around 1985[1] or earlier, describing computations on large-scale geospatial information. This is somewhat related to the modern use, but on the scale of continents, cities, and neighborhoods.[2] Modern spatial computing is centered instead on the human scale of interaction, around the size of a living room or smaller, though in the aggregate it is not limited to that scale.

In the early 1990s, as the field of virtual reality was beginning to be commercialized beyond academic and military labs, a Seattle startup called Worldesign used the term spatial computing[3] to describe the interaction between individual people and 3D spaces, operating at a more human scale than the earlier GIS usage may have contemplated. The company built a CAVE-like environment it called the Virtual Environment Theater, whose 3D experience was a virtual flyover of the Giza Plateau, circa 3000 BC. Robert Jacobson, CEO of Worldesign, attributes the origins of the term to experiments at the Human Interface Technology Lab at the University of Washington, under the direction of Thomas A. Furness III. Jacobson was a co-founder of that lab before spinning off this early VR startup.

In 1997, an academic publication by T. Caelli, Peng Lam, and H. Bunke, "Spatial Computing: Issues in Vision, Multimedia and Visualization Technologies", introduced the term more broadly to academic audiences.[4]

The term was used again in 2003 by Simon Greenwold,[5] who defined spatial computing as "human interaction with a machine in which the machine retains and manipulates referents to real objects and spaces".

Apple Vision Pro and Magic Leap

Apple announced the Vision Pro, which it describes as a spatial computing platform, on 5 June 2023. The device includes Spatial Audio, two micro-OLED displays, the Apple R1 chip, and eye tracking, and was slated for release in the United States in February 2024.[6] In announcing the device as a "spatial computer", Apple invoked its history of popularizing the 2D graphical user interfaces that supplanted earlier human–computer interaction mechanisms (such as command-line interfaces). Apple positions the spatial computer as a new category of interactive computing device, on the same level of importance as the 2D GUI.

Magic Leap had previously used the term spatial computing to describe its own devices, starting with the Magic Leap 1. Its usage appears consistent with Apple's, although the company eventually stopped using the term.

Meta famously used the term Metaverse to describe its future plans. The key difference is that spatial computing does not imply the vast inter-networking of sites into a common fabric, akin to a 3D version of the internet. Spatial computing can be conducted by one person, two people, or a small group with their local computer systems, without regard to the greater network of other users and spaces. As such, it may serve to minimize (but not completely eliminate) many of the negative aspects of large-scale online social interactions, such as griefing and other harassment, by adding social accountability and better social perception within known groups.

References