ACCESS.bus

From HandWiki
Revision as of 14:42, 6 February 2024 by Smart bot editor (talk | contribs) (update)
(diff) ← Older revision | Latest revision (diff) | Newer revision → (diff)
ACCESS.bus, or A.b for short, is a peripheral-interconnect computer bus developed by Philips and DEC in the early 1990s, based on Philips' I²C system.[1][2] It is similar in purpose to USB, in that it allows low-speed devices to be added to or removed from a computer on the fly. Although it reached the market before USB, it never achieved the same widespread adoption.[3]

History

Apple Computer's Apple Desktop Bus (ADB), introduced in the mid-1980s, allowed all sorts of low-speed devices like mice and keyboards to be daisy-chained into a single port on the computer, greatly reducing the number of ports needed, as well as the resulting cable clutter. ADB was universal on the Macintosh line by the late 1980s, and offered a clear advantage over the profusion of standards being used on PCs.[4]

A.b was an attempt to reproduce these qualities in a new standard for the PC and workstation market. It had two additional advantages over ADB: hot plugging (plug-and-play) and the ability of devices to act as their own host controllers, so devices could be plugged together without a host computer to control the communications. Philips also suggested that the ability to plug any A.b device into any computer meant that people with special devices, such as mice designed for people with disabilities, could carry their device from machine to machine.[4]

An industry group, the ACCESS.bus Industry Group (ABIG), was created in 1993 to control the development of the standard. It had 29 voting members, including Microsoft. By this point DEC had shipped A.b on some of its workstations, and a variety of companies had released peripherals for it.[4]

Development of USB began the next year, in 1994, and its consortium included a number of members of the A.b group, notably DEC and Microsoft. Interest in A.b waned, leaving Philips as the primary supporter.[5] A.b had a number of technical advantages over USB, some of which did not reappear on that system until years later, and it was also easier and less expensive to implement. However, it was much slower than USB, by a factor of roughly ten to a hundred. USB fit neatly into the performance niche between A.b and FireWire, which made it practical to design a system with USB alone. Intel's backing was another deciding factor; the company began including USB controllers in its standard motherboard chipsets, reducing the cost of implementation to roughly that of the connector.

The only widespread use of the A.b system was the DDC2Ab interface defined by the VESA group. VESA needed a standardized bus for communicating device capabilities and status between monitors and computers, and selected I²C because it required only two pins; by re-using reserved pins in the standard VGA cable, a complete A.b bus (including power) could be implemented. The bus could then be offered as an external expansion port simply by adding a socket to the monitor case. A number of monitors with A.b connectors appeared in the mid-1990s, notably from NEC, but this coincided with the heavy promotion of USB, and few devices were available to plug into them, mostly mice and keyboards. The bus nevertheless remained the standard way for a monitor to communicate setup information to the host graphics card.
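
A practical illustration of the monitor channel: under the closely related DDC2B scheme, the display's identification block (EDID) is exposed as a simple I²C memory at the conventional 7-bit address 0x50. The following is a minimal sketch, not drawn from the A.b specification, showing how such a read might look from Linux user space with the smbus2 Python library; the bus number is a per-machine assumption.

    # Minimal sketch: read the first EDID bytes from a monitor over DDC (I2C).
    # Assumes a Linux i2c-dev node wired to the display connector; bus 3 is
    # hypothetical, so check with a tool such as i2cdetect first.
    from smbus2 import SMBus

    EDID_ADDR = 0x50  # conventional 7-bit I2C address of the EDID memory

    with SMBus(3) as bus:  # opens /dev/i2c-3
        # EDID behaves like a small EEPROM: reads start at the given offset.
        header = bus.read_i2c_block_data(EDID_ADDR, 0x00, 8)
        # A valid EDID block begins with 00 FF FF FF FF FF FF 00.
        print(" ".join(f"{b:02x}" for b in header))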

Technical standard

A.b is a physical layer definition that describes the cabling and connectors used in the network. The higher layers, namely signaling and protocol, are defined to be the same as Philips' Inter-Integrated Circuit (I²C) bus.[6][7] Compared to I²C, A.b:

  • adds two additional pins to provide power to the devices (+5 V and GND)
  • allows for only 125 devices out of I²C's 1024
  • supports only the 100 kbit/s "standard mode" and 10 kbit/s "low-speed mode"
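
Because the wire protocol is plain I²C, an A.b transaction is an ordinary I²C transfer with a small message envelope around it. The sketch below is illustrative only: it assumes the envelope layout used by ACCESS.bus and inherited by its DDC/CI descendant (destination address, source address, a length byte, the body, and a single XOR checksum byte), and the helper name is hypothetical.

    # Illustrative sketch of an A.b-style message envelope carried over I2C.
    # Assumed layout: [dest, src, length, body..., checksum], where the
    # checksum is the XOR of every preceding byte. Not a verbatim rendering
    # of the ACCESS.bus specification.
    def frame_message(dest: int, src: int, body: bytes) -> bytes:
        if len(body) > 127:
            raise ValueError("A.b message bodies are limited to 127 bytes")
        msg = bytes([dest, src, len(body)]) + body
        checksum = 0
        for b in msg:
            checksum ^= b  # running XOR over the whole envelope
        return msg + bytes([checksum])

    # Hypothetical 3-byte payload from device address 0x52 to host address 0x50.
    print(frame_message(0x50, 0x52, b"\x01\x02\x03").hex(" "))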

The idea was to define a single standard that could be used both inside and outside a computer. A single I²C/A.b controller chip would be used inside the machine, connected on the motherboard to internal devices like the clock and battery power monitor. An A.b connector on the outside would then allow additional devices to be plugged into the bus. This way all of the low- and medium-speed devices on the machine would be driven by a single controller and protocol stack.[6]

A.b also defined a small set of standardized device classes. These included monitors, keyboards, "locators" (pointing devices like mice and joysticks), battery monitors, and "text devices" (modems, etc.). Depending on how much intelligence the device needed, the interface in the device could leave almost all of the work to the driver. This allowed A.b to scale down to price points low enough for devices like mice.[6]
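
To make the class mechanism concrete, the sketch below models the five classes named above together with a host-side driver lookup keyed on them. Only the class list comes from the standard; the enum values, handler names, and report format are illustrative assumptions.

    # Illustrative model of the standardized A.b device classes.
    from enum import Enum, auto

    class DeviceClass(Enum):
        MONITOR = auto()
        KEYBOARD = auto()
        LOCATOR = auto()          # pointing devices: mice, joysticks
        BATTERY_MONITOR = auto()
        TEXT_DEVICE = auto()      # modems and similar character streams

    # A minimal device reports only its class; the host driver does the work.
    DRIVERS = {
        DeviceClass.LOCATOR: lambda report: print("pointer report:", report),
        DeviceClass.KEYBOARD: lambda report: print("key report:", report),
    }

    def dispatch(dev_class: DeviceClass, report: bytes) -> None:
        handler = DRIVERS.get(dev_class)
        if handler is not None:
            handler(report)

    dispatch(DeviceClass.LOCATOR, b"\x05\xfb")  # hypothetical movement report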

Compared to USB, A.b had several advantages. Any device on the bus could be a master or a slave, and a protocol was defined for selecting which role a device should take under any particular circumstance. This allowed devices to be plugged together without a host computer: for instance, a digital camera could be plugged directly into a printer and become the master. Under (standard) USB the computer is always the master and the devices are always slaves.
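
This master/slave flexibility comes directly from I²C, which is multi-master by design: the bus lines are open-drain, so a device that transmits a recessive 1 but reads back a dominant 0 knows another master is driving the bus and withdraws. The simulation below illustrates that arbitration rule in isolation; it is a teaching sketch, not driver code.

    # Teaching sketch of I2C multi-master arbitration (inherited by A.b).
    # Open-drain wiring makes the bus level the AND of all drivers: any
    # device pulling low wins. A master loses arbitration the moment it
    # sends a 1 but observes a 0 on the wire.
    def arbitrate(bitstreams: list[list[int]]) -> int:
        contenders = set(range(len(bitstreams)))
        for position in range(len(bitstreams[0])):
            wire = min(bitstreams[i][position] for i in contenders)  # wired-AND
            lost = {i for i in contenders
                    if bitstreams[i][position] == 1 and wire == 0}
            contenders -= lost
            if len(contenders) == 1:
                break
        return contenders.pop()  # index of a surviving master

    # The device sending the lower value wins on a wired-AND bus: here
    # master 1 (0101000) beats master 0 (0101100) at bit position 4.
    print(arbitrate([[0, 1, 0, 1, 1, 0, 0], [0, 1, 0, 1, 0, 0, 0]]))  # -> 1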

To support the same sort of device-to-device connection, USB requires additional support in dual-role devices so that they can emulate a host; this was only standardized years later as part of the USB On-The-Go system. Another advantage of A.b was that devices could be strung together into a single daisy chain: A.b could support, but did not require, the use of hubs, which reduced cable clutter significantly.[6]

References
