# Byte

byte
- Unit system: unit derived from the bit
- Unit of: digital information, data size
- Symbol: B, or o (when 8 bits)

The byte is a unit of digital information that most commonly consists of eight bits. Historically, the byte was the number of bits used to encode a single character of text in a computer,[1][2] and for this reason it is the smallest addressable unit of memory in many computer architectures. To disambiguate arbitrarily sized bytes from the common 8-bit definition, network protocol documents such as the Internet Protocol (RFC 791) refer to an 8-bit byte as an octet.[3] The bits in an octet are usually numbered from 0 to 7 or from 7 to 0, depending on the bit endianness; in the former convention, the first bit is number 0 and the eighth bit is number 7.

The size of the byte has historically been hardware-dependent, and no definitive standards existed that mandated the size. Sizes from 1 to 48 bits have been used.[4][5][6][7] The six-bit character code was an often-used implementation in early encoding systems, and computers using six-bit and nine-bit bytes were common in the 1960s. These systems often had memory words of 12, 18, 24, 30, 36, 48, or 60 bits, corresponding to 2, 3, 4, 5, 6, 8, or 10 six-bit bytes. In this era, bit groupings in the instruction stream were often referred to as syllables[note 1] or slabs, before the term byte became common.

The modern de facto standard of eight bits, as documented in ISO/IEC 2382-1:1993, is a convenient power of two permitting the binary-encoded values 0 through 255 for one byte—2 to the power of 8 is 256.[8] The international standard IEC 80000-13 codified this common meaning. Many types of applications use information representable in eight or fewer bits and processor designers commonly optimize for this usage. The popularity of major commercial computing architectures has aided in the ubiquitous acceptance of the 8-bit byte.[9] Modern architectures typically use 32- or 64-bit words, built of four or eight bytes, respectively.

The unit symbol for the byte was designated as the upper-case letter B by the International Electrotechnical Commission (IEC) and Institute of Electrical and Electronics Engineers (IEEE).[10] Internationally, the unit octet, symbol o, explicitly defines a sequence of eight bits, eliminating the potential ambiguity of the term "byte".[11][12]

## Etymology and history

The term byte was coined by Werner Buchholz in June 1956,[4][13][14][note 2] during the early design phase for the IBM Stretch[15][16][1][13][14][17][18] computer, which had addressing to the bit and variable field length (VFL) instructions with a byte size encoded in the instruction.[13] It is a deliberate respelling of bite to avoid accidental mutation to bit.[1][13][19][note 3]

Another origin of byte, for bit groups smaller than a computer's word size and in particular groups of four bits, is on record from Louis G. Dooley, who claimed he coined the term while working with Jules Schwartz and Dick Beeler on an air defense system called SAGE at MIT Lincoln Laboratory in 1956 or 1957; SAGE was jointly developed by Rand, MIT, and IBM.[20][21] Schwartz's language JOVIAL later used the term, but Schwartz himself recalled only vaguely that it was derived from the AN/FSQ-31.[22][21]

Early computers used a variety of four-bit binary-coded decimal (BCD) representations and the six-bit codes for printable graphic patterns common in the U.S. Army (FIELDATA) and Navy. These representations included alphanumeric characters and special graphical symbols. These sets were expanded in 1963 to the seven-bit American Standard Code for Information Interchange (ASCII), adopted as a Federal Information Processing Standard, which replaced the incompatible teleprinter codes in use by different branches of the U.S. government and universities during the 1960s. ASCII included the distinction of upper- and lowercase alphabets and a set of control characters to facilitate the transmission of written language as well as printing device functions, such as page advance and line feed, and the physical or logical control of data flow over the transmission media.[18] During the early 1960s, while also active in ASCII standardization, IBM introduced in its System/360 product line the eight-bit Extended Binary Coded Decimal Interchange Code (EBCDIC), an expansion of the six-bit binary-coded decimal (BCDIC) representations[note 4] used in earlier card punches.[23] The prominence of the System/360 led to the ubiquitous adoption of the eight-bit storage size,[18][16][13] even though the EBCDIC and ASCII encoding schemes themselves are incompatible.

In the early 1960s, AT&T introduced digital telephony on long-distance trunk lines. These used the eight-bit μ-law encoding. This large investment promised to reduce transmission costs for eight-bit data.

The development of eight-bit microprocessors in the 1970s popularized this storage size. Microprocessors such as the Intel 8008, the direct predecessor of the 8080 and the 8086, used in early personal computers, could also perform a small number of operations on the four-bit pairs in a byte, such as the decimal-add-adjust (DAA) instruction. A four-bit quantity is often called a nibble, also nybble, which is conveniently represented by a single hexadecimal digit.
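
The correspondence between a nibble and a single hexadecimal digit can be shown with a short C sketch (purely illustrative; the value 0x5A is an arbitrary example and is unrelated to the 8008's DAA instruction):

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint8_t byte = 0x5A;                 /* one byte holds two nibbles */
    uint8_t high = (byte >> 4) & 0x0F;   /* high nibble: 0x5           */
    uint8_t low  = byte & 0x0F;          /* low nibble:  0xA           */
    printf("byte 0x%02X = nibbles 0x%X and 0x%X\n",
           (unsigned)byte, (unsigned)high, (unsigned)low);
    return 0;
}
```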

The term octet is used to unambiguously specify a size of eight bits.[18][12] It is used extensively in protocol definitions.

Historically, the term octad or octade was also used to denote eight bits, at least in Western Europe;[24][25] however, this usage is no longer common. The exact origin of the term is unclear, but it can be found in British, Dutch, and German sources of the 1960s and 1970s, and throughout the documentation of Philips mainframe computers.

## Unit symbol

The unit symbol for the byte is specified in IEC 80000-13, IEEE 1541 and the Metric Interchange Format[10] as the upper-case character B.

In the International System of Quantities (ISQ), B is the symbol of the bel, a unit of logarithmic power ratio named after Alexander Graham Bell, creating a conflict with the IEC specification. However, little danger of confusion exists, because the bel is a rarely used unit. It is used primarily in its decadic submultiple, the decibel (dB), for signal strength and sound pressure level measurements, whereas a unit for one-tenth of a byte, the decibyte, and other fractions are used only in derived units, such as transmission rates.

The lowercase letter o is defined as the symbol for the octet in IEC 80000-13 and is commonly used in languages such as French[26] and Romanian; it is also combined with metric prefixes for multiples, for example ko and Mo.

## Multiple-byte units

Multiples of bytes

Decimal

| Value | Metric | |
|-------|--------|---|
| 1000 | kB | kilobyte |
| 1000² | MB | megabyte |
| 1000³ | GB | gigabyte |
| 1000⁴ | TB | terabyte |
| 1000⁵ | PB | petabyte |
| 1000⁶ | EB | exabyte |
| 1000⁷ | ZB | zettabyte |
| 1000⁸ | YB | yottabyte |

Binary

| Value | IEC | | JEDEC | |
|-------|-----|---|-------|---|
| 1024 | KiB | kibibyte | KB | kilobyte |
| 1024² | MiB | mebibyte | MB | megabyte |
| 1024³ | GiB | gibibyte | GB | gigabyte |
| 1024⁴ | TiB | tebibyte | – | |
| 1024⁵ | PiB | pebibyte | – | |
| 1024⁶ | EiB | exbibyte | – | |
| 1024⁷ | ZiB | zebibyte | – | |
| 1024⁸ | YiB | yobibyte | – | |

More than one system exists to define larger units based on the byte. Some systems are based on powers of 10; other systems are based on powers of 2. Nomenclature for these systems has been the subject of confusion. Systems based on powers of 10 reliably use standard SI prefixes (kilo, mega, giga, ...) and their corresponding symbols (k, M, G, ...). Systems based on powers of 2, however, might use binary prefixes (kibi, mebi, gibi, ...) and their corresponding symbols (Ki, Mi, Gi, ...) or they might use the prefixes K, M, and G, creating ambiguity.

While the numerical difference between the decimal and binary interpretations is relatively small for the kilobyte (about 2% smaller than the kibibyte), the systems deviate increasingly as units grow larger (the relative deviation grows by 2.4% for each three orders of magnitude). For example, a power-of-10-based yottabyte is about 17% smaller than power-of-2-based yobibyte.
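
The growth of this deviation can be verified with a short calculation. The following C sketch (illustrative only, not part of any standard) prints each decimal multiple next to its binary counterpart and the percentage by which the decimal unit falls short:

```c
#include <stdio.h>
#include <math.h>

int main(void) {
    const char *dec[] = {"kB", "MB", "GB", "TB", "PB", "EB", "ZB", "YB"};
    const char *bin[] = {"KiB", "MiB", "GiB", "TiB", "PiB", "EiB", "ZiB", "YiB"};
    for (int n = 1; n <= 8; n++) {
        double decimal = pow(1000.0, n);   /* power-of-10 unit */
        double binary  = pow(1024.0, n);   /* power-of-2 unit  */
        /* how much smaller the decimal unit is than the binary one */
        double deviation = 100.0 * (1.0 - decimal / binary);
        printf("%-3s = 10^%-2d bytes, %-3s = 2^%-2d bytes, difference %.1f%%\n",
               dec[n - 1], 3 * n, bin[n - 1], 10 * n, deviation);
    }
    return 0;
}
```

Running it shows the kilobyte about 2.3% smaller than the kibibyte and the yottabyte about 17.3% smaller than the yobibyte, matching the figures above.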

### Units based on powers of 10

Definition of prefixes using powers of 10—in which 1 kilobyte (symbol kB) is defined to equal 1,000 bytes—is recommended by the International Electrotechnical Commission (IEC).[27] The IEC standard defines eight such multiples, up to 1 yottabyte (YB), equal to 1000⁸ bytes.[28]

This definition is most commonly used for data-rate units in computer networks, internal bus, hard drive and flash media transfer speeds, and for the capacities of most storage media, particularly hard drives,[29] flash-based storage,[30] and DVDs. Operating systems that use this definition include macOS,[31] iOS,[31] Ubuntu,[32] and Debian.[33] It is also consistent with the other uses of the SI prefixes in computing, such as CPU clock speeds or measures of performance.

### Units based on powers of 2

A system of units based on powers of 2 in which 1 kibibyte (KiB) is equal to 1,024 (i.e., 2¹⁰) bytes is defined by international standard IEC 80000-13 and is supported by national and international standards bodies (BIPM, IEC, NIST). The IEC standard defines eight such multiples, up to 1 yobibyte (YiB), equal to 1024⁸ bytes.

An alternate system of nomenclature for the same units (referred to here as the customary convention), in which 1 kilobyte (KB) is equal to 1,024 bytes,[34][35][36] 1 megabyte (MB) is equal to 1024² bytes, and 1 gigabyte (GB) is equal to 1024³ bytes, is mentioned by a 1990s JEDEC standard. Only the first three multiples (up to GB) are mentioned by the JEDEC standard, which makes no mention of TB and larger. The customary convention is used by the Microsoft Windows operating system,[37] for random-access memory capacities such as main memory and CPU cache size, and in marketing and billing by telecommunication companies, such as Vodafone,[38] AT&T,[39] Orange[40] and Telstra.[41]

This definition was used by Apple Inc. operating systems prior to Mac OS X Snow Leopard and iOS 10 before switching to units based on powers of 10.[31]

### Parochial units

Various computer vendors have coined terms for data of various sizes, sometimes with different sizes for the same term even within a single vendor. These terms include double word, half word, long word, quad word, slab, superword and syllable. There are also informal terms, e.g., half byte and nybble for 4 bits, and octal K for 1000₈ (512).

### History of the conflicting definitions

Figure: Percentage difference between decimal and binary interpretations of the unit prefixes grows with increasing storage size.

Contemporary[note 5] computer memory has a binary architecture making a definition of memory units based on powers of 2 most practical. The use of the metric prefix kilo for binary multiples arose as a convenience, because 1,024 is approximately 1,000.[42] This definition was popular in early decades of personal computing, with products like the Tandon 5¼-inch DD floppy format (holding 368,640 bytes) being advertised as "360 KB", following the 1,024-byte convention. It was not universal, however. The Shugart SA-400 5¼-inch floppy disk held 109,375 bytes unformatted,[43] and was advertised as "110 Kbyte", using the 1000 convention.[44] Likewise, the 8-inch DEC RX01 floppy (1975) held 256,256 bytes formatted, and was advertised as "256k".[45] Other disks were advertised using a mixture of the two definitions: notably, 3½-inch HD disks advertised as "1.44 MB" in fact have a capacity of 1,440 KiB, the equivalent of 1.47 MB or 1.41 MiB.
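
The mixed "1.44 MB" figure can be reproduced with a short calculation; this C sketch (illustrative only) converts 1,440 KiB into decimal megabytes, mebibytes, and the mixed unit actually used on the label:

```c
#include <stdio.h>

int main(void) {
    /* "1.44 MB" HD floppies hold 1,440 KiB: 1,440 units of 1,024 bytes. */
    long bytes = 1440L * 1024L;                               /* 1,474,560 bytes */
    printf("bytes:      %ld\n", bytes);
    printf("decimal MB: %.2f\n", bytes / 1e6);                /* ~1.47 */
    printf("binary MiB: %.2f\n", bytes / (1024.0 * 1024.0));  /* ~1.41 */
    /* The advertised figure divides by the mixed unit of
       1,000 x 1,024 bytes, which is neither MB nor MiB. */
    printf("mixed 'MB': %.2f\n", bytes / (1000.0 * 1024.0));  /* 1.44 */
    return 0;
}
```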

In 1995, the International Union of Pure and Applied Chemistry's (IUPAC) Interdivisional Committee on Nomenclature and Symbols attempted to resolve this ambiguity by proposing a set of binary prefixes for the powers of 1024, including kibi (kilobinary), mebi (megabinary), and gibi (gigabinary).[46][47]

In December 1998, the IEC addressed such multiple usages and definitions by adopting the IUPAC's proposed prefixes (kibi, mebi, gibi, etc.) to unambiguously denote powers of 1024.[48] Thus one kibibyte (1 KiB) is 1024¹ bytes = 1,024 bytes, one mebibyte (1 MiB) is 1024² bytes = 1,048,576 bytes, and so on.

In 1999, Donald Knuth suggested calling the kibibyte a "large kilobyte" (KKB).[49]

### Modern standard definitions

The IEC adopted the IUPAC proposal and published the standard in January 1999.[50][51] The IEC prefixes are now part of the International System of Quantities. The IEC further specified that the kilobyte should only be used to refer to 1,000 bytes.

### Lawsuits over definition

Lawsuits arising from alleged consumer confusion over the binary and decimal definitions of multiples of the byte have generally ended in favor of the manufacturers, with courts holding that the legal definition of gigabyte or GB is 1 GB = 1,000,000,000 (10⁹) bytes (the decimal definition), rather than the binary definition (2³⁰ bytes). Specifically, the United States District Court for the Northern District of California held that "the U.S. Congress has deemed the decimal definition of gigabyte to be the 'preferred' one for the purposes of 'U.S. trade and commerce' [...] The California Legislature has likewise adopted the decimal system for all 'transactions in this state.'"[52]

Earlier lawsuits had ended in settlement with no court ruling on the question, such as a lawsuit against drive manufacturer Western Digital.[53][54] Western Digital settled the challenge and added explicit disclaimers to products that the usable capacity may differ from the advertised capacity.[53] Seagate was sued on similar grounds and also settled.[53][55]

### Practical examples

| Unit | Approximate equivalent |
|------|------------------------|
| byte | a basic Latin character |
| kilobyte | text of "Jabberwocky"; a typical favicon |
| megabyte | text of Harry Potter and the Goblet of Fire[56] |
| gigabyte | about half an hour of video;[57] CD-quality audio of Mellon Collie and the Infinite Sadness |
| terabyte | the largest consumer hard drive in 2007;[58] 1080p 4:3 video of all 61 episodes of the animated television series Avatar: The Last Airbender[note 6] |
| petabyte | 2000 years of MP3-encoded music[59] |
| exabyte | global monthly Internet traffic in 2004[60] |
| zettabyte | global yearly Internet traffic in 2016[61] |

## Common uses

Many programming languages define the data type byte.

The C and C++ programming languages define byte as an "addressable unit of data storage large enough to hold any member of the basic character set of the execution environment" (clause 3.6 of the C standard). The C standard requires that the integral data type unsigned char must hold at least 256 different values, and is represented by at least eight bits (clause 5.2.4.2.1). Various implementations of C and C++ reserve 8, 9, 16, 32, or 36 bits for the storage of a byte.[62][63][note 7] In addition, the C and C++ standards require that there are no gaps between two bytes. This means every bit in memory is part of a byte.[64]
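
On a given implementation these guarantees can be inspected through the <limits.h> header; the following minimal C sketch simply prints the relevant constants:

```c
#include <stdio.h>
#include <limits.h>

int main(void) {
    /* CHAR_BIT is the number of bits in a byte on this implementation
       (at least 8, but possibly more on unusual platforms). */
    printf("bits per byte (CHAR_BIT): %d\n", CHAR_BIT);

    /* unsigned char must be able to hold at least the values 0..255. */
    printf("unsigned char range: 0 to %u\n", (unsigned)UCHAR_MAX);

    /* sizeof is measured in bytes, so sizeof(char) is 1 by definition. */
    printf("sizeof(char): %zu\n", sizeof(char));
    return 0;
}
```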

Java's primitive data type byte is defined as eight bits. It is a signed data type, holding values from −128 to 127.

.NET programming languages, such as C#, define byte as an unsigned type and sbyte as a signed data type, holding values from 0 to 255 and from −128 to 127, respectively.
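
For illustration (keeping the example in C rather than C# or Java), the signed and unsigned readings of the same eight-bit pattern can be compared with the fixed-width types from <stdint.h>; the bit pattern 0x80 is an arbitrary choice, and the signed conversion relies on the usual two's-complement behavior, which is implementation-defined before C23:

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint8_t raw = 0x80;               /* arbitrary bit pattern 1000 0000 */
    int8_t  as_signed = (int8_t)raw;  /* two's-complement reading; wraps
                                         to -128 on virtually all platforms */
    printf("0x%02X as unsigned byte: %u\n", (unsigned)raw, (unsigned)raw);  /* 128  */
    printf("0x%02X as signed byte:   %d\n", (unsigned)raw, (int)as_signed); /* -128 */
    return 0;
}
```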

In data transmission systems, the byte is used as a contiguous sequence of bits in a serial data stream, representing the smallest distinguished unit of data. A transmission unit might additionally include start bits, stop bits, and parity bits, and thus its size may vary from seven to twelve bits to contain a single seven-bit ASCII code.[65]
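
As an illustrative sketch (not drawn from the cited source), the following C function builds one common framing, "7E1": a start bit, seven data bits sent least-significant bit first, an even-parity bit, and a stop bit, giving ten bits on the wire for a single seven-bit ASCII character:

```c
#include <stdio.h>
#include <stdint.h>

/* Build a 10-bit asynchronous serial frame for a 7-bit ASCII character:
   one start bit (0), seven data bits LSB first, one even-parity bit,
   and one stop bit (1). This is only one of several possible framings. */
static uint16_t frame_7e1(char c) {
    uint8_t data = (uint8_t)c & 0x7F;               /* keep 7 ASCII bits */
    uint8_t parity = 0;
    for (int i = 0; i < 7; i++)
        parity ^= (data >> i) & 1;                  /* even parity bit   */

    uint16_t frame = 0;
    int pos = 0;
    frame |= (uint16_t)0 << pos++;                  /* start bit = 0     */
    for (int i = 0; i < 7; i++)
        frame |= (uint16_t)((data >> i) & 1) << pos++;  /* data, LSB first */
    frame |= (uint16_t)parity << pos++;             /* parity bit        */
    frame |= (uint16_t)1 << pos++;                  /* stop bit = 1      */
    return frame;                                   /* bit 0 goes first on the wire */
}

int main(void) {
    uint16_t f = frame_7e1('A');                    /* ASCII 0x41        */
    for (int i = 0; i < 10; i++)                    /* print in wire order */
        putchar('0' + ((f >> i) & 1));
    putchar('\n');                                  /* prints 0100000101 */
    return 0;
}
```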

## Notes

1. The term syllable was used for bytes containing instructions or constituents of instructions, not for data bytes.
2. Many sources erroneously indicate July 1956 as the birthday of the term byte, but Werner Buchholz stated that the term was coined in June 1956; the earliest document supporting this dates from 1956-06-11. Buchholz stated that the transition to 8-bit bytes was conceived in August 1956, but the earliest document found using this notion dates from September 1956.
3. Some later machines, e.g., the Burroughs B1700, CDC 3600, DEC PDP-6, and DEC PDP-10, had the ability to operate on arbitrary bytes no larger than the word size.
4. There was more than one BCD code page.
5. Through the 1970s there were machines with decimal architectures.
6. Video is encoded at a bitrate of 27.80 Mbit/s, with a runtime of 1,403 min[66] (84,180 seconds), resulting in an approximate size of 0.2925 terabytes.
7. The actual number of bits in a particular implementation is given by the macro CHAR_BIT, defined in the header file limits.h.

## References

1. Blaauw, Gerrit Anne; Brooks, Jr., Frederick Phillips; Buchholz, Werner (1962), "4: Natural Data Units", in Buchholz, Werner, Planning a Computer System – Project Stretch, McGraw-Hill Book Company, Inc. / The Maple Press Company, York, PA., pp. 39–40, retrieved 2017-04-03, "Terms used here to describe the structure imposed by the machine design, in addition to bit, are listed below.
Byte denotes a group of bits used to encode a character, or the number of bits transmitted in parallel to and from input-output units. A term other than character is used here because a given character may be represented in different applications by more than one code, and different codes may use different numbers of bits (i.e., different byte sizes). In input-output transmission the grouping of bits may be completely arbitrary and have no relation to actual characters. (The term is coined from bite, but respelled to avoid accidental mutation to bit.)
A word consists of the number of data bits transmitted in parallel from or to memory in one memory cycle. Word size is thus defined as a structural property of the memory. (The term catena was coined for this purpose by the designers of the Bull GAMMA 60 computer.)
Block refers to the number of words transmitted to or from an input-output unit in response to a single input-output instruction. Block size is a structural property of an input-output unit; it may have been fixed by the design or left to be varied by the program."

2. Bemer, Robert William (1959), "A proposal for a generalized card code of 256 characters", Communications of the ACM 2 (9): 19–23, doi:10.1145/368424.368435
3. Postel, J. (September 1981) (in en), Internet Protocol DARPA INTERNET PROGRAM PROTOCOL SPECIFICATION, p. 43, doi:10.17487/RFC0791, RFC 791, retrieved 28 August 2020, "octet An eight bit byte."
4. Buchholz, Werner (1956-06-11). "7. The Shift Matrix". The Link System. IBM. pp. 5–6. Stretch Memo No. 39G. Retrieved 2016-04-04. "[…] Most important, from the point of view of editing, will be the ability to handle any characters or digits, from 1 to 6 bits long.
Figure 2 shows the Shift Matrix to be used to convert a 60-bit word, coming from Memory in parallel, into characters, or 'bytes' as we have called them, to be sent to the Adder serially. The 60 bits are dumped into magnetic cores on six different levels. Thus, if a 1 comes out of position 9, it appears in all six cores underneath. Pulsing any diagonal line will send the six bits stored along that line to the Adder. The Adder may accept all or only some of the bits.
Assume that it is desired to operate on 4 bit decimal digits, starting at the right. The 0-diagonal is pulsed first, sending out the six bits 0 to 5, of which the Adder accepts only the first four (0–3). Bits 4 and 5 are ignored. Next, the 4 diagonal is pulsed. This sends out bits 4 to 9, of which the last two are again ignored, and so on.
It is just as easy to use all six bits in alphanumeric work, or to handle bytes of only one bit for logical analysis, or to offset the bytes by any number of bits. All this can be done by pulling the appropriate shift diagonals. An analogous matrix arrangement is used to change from serial to parallel operation at the output of the adder. […]"

5. 3600 Computer System – Reference Manual. St. Paul, Minnesota, USA: Control Data Corporation (CDC). 1966-10-11. 60021300. Retrieved 2017-04-05. "Byte – A partition of a computer word."  (NB. Discusses 12-bit, 24-bit and 48-bit bytes.)
6. Rao, Thammavaram R. N.; Fujiwara, Eiji (1989). McCluskey, Edward J.. ed. Error-Control Coding for Computer Systems. Prentice Hall Series in Computer Engineering (1 ed.). Englewood Cliffs, NJ, USA: Prentice Hall. ISBN 0-13-283953-9.  (NB. Example of the usage of a code for "4-bit bytes".)
7. Tafel, Hans Jörg (1971) (in de). Einführung in die digitale Datenverarbeitung. Munich: Carl Hanser Verlag. p. 300. ISBN 3-446-10569-7. "Byte = zusammengehörige Folge von i.a. neun Bits; davon sind acht Datenbits, das neunte ein Prüfbit"  (NB. Defines a byte as a group of typically 9 bits; 8 data bits plus 1 parity bit.)
8. ISO/IEC 2382-1: 1993, Information technology – Vocabulary – Part 1: Fundamental terms. 1993. "byte
A string that consists of a number of bits, treated as a unit, and usually representing a character or a part of a character.
NOTES
1 The number of bits in a byte is fixed for a given data processing system.
2 The number of bits in a byte is usually 8."

9. Jaffer, Aubrey (2011). "Metric-Interchange-Format".
10. ISO 2382-4, Organization of data (2 ed.). "byte, octet, 8-bit byte: A string that consists of eight bits."
11. Buchholz, Werner (February 1977). "The Word 'Byte' Comes of Age...". Byte Magazine 2 (2): 144. "[…] The first reference found in the files was contained in an internal memo written in June 1956 during the early days of developing Stretch. A byte was described as consisting of any number of parallel bits from one to six. Thus a byte was assumed to have a length appropriate for the occasion. Its first use was in the context of the input–output equipment of the 1950s, which handled six bits at a time. The possibility of going to 8-bit bytes was considered in August 1956 and incorporated in the design of Stretch shortly thereafter. The first published reference to the term occurred in 1959 in a paper 'Processing Data in Bits and Pieces' by G A Blaauw, F P Brooks Jr and W Buchholz in the IRE Transactions on Electronic Computers, June 1959, page 121. The notions of that paper were elaborated in Chapter 4 of Planning a Computer System (Project Stretch), edited by W Buchholz, McGraw-Hill Book Company (1962). The rationale for coining the term was explained there on page 40 as follows:
Byte denotes a group of bits used to encode a character, or the number of bits transmitted in parallel to and from input-output units. A term other than character is used here because a given character may be represented in different applications by more than one code, and different codes may use different numbers of bits (ie, different byte sizes). In input-output transmission the grouping of bits may be completely arbitrary and have no relation to actual characters. (The term is coined from bite, but respelled to avoid accidental mutation to bit.)
System/360 took over many of the Stretch concepts, including the basic byte and word sizes, which are powers of 2. For economy, however, the byte size was fixed at the 8 bit maximum, and addressing at the bit level was replaced by byte addressing. [...]".

12. "Timeline of the IBM Stretch/Harvest era (1956–1961)". Computer History Museum. June 1956. "1956 Summer: Gerrit Blaauw, Fred Brooks, Werner Buchholz, John Cocke and Jim Pomerene join the Stretch team. Lloyd Hunter provides transistor leadership.
1956 July [sic]: In a report Werner Buchholz lists the advantages of a 64-bit word length for Stretch. It also supports NSA's requirement for 8-bit bytes. Werner's term "Byte" first popularized in this memo."
(NB. This timeline erroneously specifies the birth date of the term "byte" as July 1956, while Buchholz actually used the term as early as June 1956.)
13. Buchholz, Werner (1956-07-31). "5. Input-Output". Memory Word Length. IBM. p. 2. Stretch Memo No. 40. Retrieved 2016-04-04. "[…] 60 is a multiple of 1, 2, 3, 4, 5, and 6. Hence bytes of length from 1 to 6 bits can be packed efficiently into a 60-bit word without having to split a byte between one word and the next. If longer bytes were needed, 60 bits would, of course, no longer be ideal. With present applications, 1, 4, and 6 bits are the really important cases.
With 64-bit words, it would often be necessary to make some compromises, such as leaving 4 bits unused in a word when dealing with 6-bit bytes at the input and output. However, the LINK Computer can be equipped to edit out these gaps and to permit handling of bytes which are split between words. […]"

14. Buchholz, Werner (1956-09-19). "2. Input-Output Byte Size". Memory Word Length and Indexing. IBM. p. 1. Stretch Memo No. 45. Retrieved 2016-04-04. "[…] The maximum input-output byte size for serial operation will now be 8 bits, not counting any error detection and correction bits. Thus, the Exchange will operate on an 8-bit byte basis, and any input-output units with less than 8 bits per byte will leave the remaining bits blank. The resultant gaps can be edited out later by programming […]"
15. Raymond, Eric Steven (2017). "byte definition".
16. Bemer, Robert William (2000-08-08). "Why is a byte 8 bits? Or is it?". Computer History Vignettes. "[…] I came to work for IBM, and saw all the confusion caused by the 64-character limitation. Especially when we started to think about word processing, which would require both upper and lower case. [...] I even made a proposal (in view of STRETCH, the very first computer I know of with an 8-bit byte) that would extend the number of punch card character codes to 256 […]. So some folks started thinking about 7-bit characters, but this was ridiculous. With IBM's STRETCH computer as background, handling 64-character words divisible into groups of 8 (I designed the character set for it, under the guidance of Dr. Werner Buchholz, the man who DID coin the term 'byte' for an 8-bit grouping). […] It seemed reasonable to make a universal 8-bit character set, handling up to 256. In those days my mantra was 'powers of 2 are magic'. And so the group I headed developed and justified such a proposal […] The IBM 360 used 8-bit characters, although not ASCII directly. Thus Buchholz's 'byte' caught on everywhere. I myself did not like the name for many reasons. The design had 8 bits moving around in parallel. But then came a new IBM part, with 9 bits for self-checking, both inside the CPU and in the tape drives. I exposed this 9-bit byte to the press in 1973. But long before that, when I headed software operations for Cie. Bull in France in 1965–66, I insisted that 'byte' be deprecated in favor of 'octet'. […] It is justified by new communications methods that can carry 16, 32, 64, and even 128 bits in parallel. But some foolish people now refer to a '16-bit byte' because of this parallel transfer, which is visible in the UNICODE set. I'm not sure, but maybe this should be called a 'hextet'. […]"
17. Blaauw, Gerrit Anne; Brooks, Jr., Frederick Phillips; Buchholz, Werner (June 1959). "Processing Data in Bits and Pieces". IRE Transactions on Electronic Computers: 121.
18. Dooley, Louis G. (February 1995). "Byte: The Word". BYTE (Ocala, FL, USA). "[…] The word byte was coined around 1956 to 1957 at MIT Lincoln Laboratories within a project called SAGE (the North American Air Defense System), which was jointly developed by Rand, Lincoln Labs, and IBM. In that era, computer memory structure was already defined in terms of word size. A word consisted of x number of bits; a bit represented a binary notational position in a word. Operations typically operated on all the bits in the full word.
We coined the word byte to refer to a logical set of bits less than a full word size. At that time, it was not defined specifically as x bits but typically referred to as a set of 4 bits, as that was the size of most of our coded data items. Shortly afterward, I went on to other responsibilities that removed me from SAGE. After having spent many years in Asia, I returned to the U.S. and was bemused to find out that the word byte was being used in the new microcomputer technology to refer to the basic addressable memory unit.".
(NB. According to his son, Dooley wrote to him: "On good days, we would have the XD-1 up and running and all the programs doing the right thing, and we then had some time to just sit and talk idly, as we waited for the computer to finish doing its thing. On one such occasion, I coined the word "byte", they (Jules Schwartz and Dick Beeler) liked it, and we began using it amongst ourselves. The origin of the word was a need for referencing only a part of the word length of the computer, but a part larger than just one bit...Many programs had to access just a specific 4-bit segment of the full word...I wanted a name for this smaller segment of the fuller word. The word "bit" lead to "bite" (meaningfully less than the whole), but for a unique spelling, "i" could be "y", and thus the word "byte" was born.")
19. Ram, Stefan (17 January 2003). "Erklärung des Wortes "Byte" im Rahmen der Lehre binärer Codes" (in de). Berlin, Germany: Freie Universität Berlin.
20. Origin of the term "byte", 1956, retrieved 2017-04-10, "A question-and-answer session at an ACM conference on the history of programming languages included this exchange:
JOHN GOODENOUGH: You mentioned that the term "byte" is used in JOVIAL. Where did the term come from?
JULES SCHWARTZ (inventor of JOVIAL): As I recall, the AN/FSQ-31, a totally different computer than the 709, was byte oriented. I don't recall for sure, but I'm reasonably certain the description of that computer included the word "byte," and we used it.
FRED BROOKS: May I speak to that? Werner Buchholz coined the word as part of the definition of STRETCH, and the AN/FSQ-31 picked it up from STRETCH, but Werner is very definitely the author of that word.
SCHWARTZ: That's right. Thank you."

21. Williams, R. H. (1969-01-01). British Commercial Computer Digest: Pergamon Computer Data Series. Pergamon Press. ISBN 978-1483122106.
22. "When is a kilobyte a kibibyte? And an MB an MiB?". The International System of Units and the IEC. International Electrotechnical Commission. )
23. Prefixes for Binary Multiples — The NIST Reference on Constants, Units, and Uncertainty
24. Matsuoka, Satoshi; Sato, Hitoshi; Tatebe, Osamu; Koibuchi, Michihiro; Fujiwara, Ikki; Suzuki, Shuji; Kakuta, Masanori; Ishida, Takashi et al. (2014-09-15). "Extreme Big Data (EBD): Next Generation Big Data Infrastructure Technologies Towards Yottabyte/Year" (in en). Supercomputing Frontiers and Innovations 1 (2): 89–107. doi:10.14529/jsfi140206. ISSN 2313-8734.
25. 1977 Disk/Trend Report Rigid Disk Drives, published June 1977
26. SanDisk USB Flash Drive "Note: 1 megabyte (MB) = 1 million bytes; 1 gigabyte (GB) = 1 billion bytes."
27. "How iOS and macOS report storage capacity" (in en). 27 February 2018.
28. Kilobyte – Definition and More from the Free Merriam-Webster Dictionary . Merriam-webster.com (2010-08-13). Retrieved on 2011-01-07.
29. Kilobyte – Definition of Kilobyte at Dictionary.com . Dictionary.reference.com (1995-09-29). Retrieved on 2011-01-07.
30. Definition of kilobyte from Oxford Dictionaries Online . Askoxford.com. Retrieved on 2011-01-07.
31. "Determining Actual Disk Size: Why 1.44 MB Should Be 1.40 MB". Support.microsoft.com. 2003-05-06.
32. "3G/GPRS data rates". Vodafone Ireland.
33. "Internet Mobile Access". Orange Romania.
34. "Prefixes for binary multiples". International Electrotechnical Commission.
35. "SA400 minifloppy". Swtpc.com. 2013-08-14.
36. IUCr 1995 Report - IUPAC Interdivisional Committee on Nomenclature and Symbols (IDCNS) http://ww1.iucr.org/iucr-top/cexec/rep95/idcns.htm
37. "Binary Prefix" University of Auckland Department of Computer Science https://wiki.cs.auckland.ac.nz/stageonewiki/index.php/Binary_prefix
38. "In December 1998 the International Electrotechnical Commission (IEC) [...] approved as an IEC International Standard names and symbols for prefixes for binary multiples for use in the fields of data processing and data transmission."
39. NIST "Prefixes for binary multiples" https://physics.nist.gov/cuu/Units/binary.html
40. Amendment 2 to IEC International Standard IEC 60027-2: Letter symbols to be used in electrical technology – Part 2: Telecommunications and electronics.
41. "Order Granting Motion to Dismiss". United States District Court for the Northern District of California.
42. Mook, Nate (2006-06-28). "Western Digital Settles Capacity Suit". betanews.
43. Baskin, Scott D. (2006-02-01). "Defendant Western Digital Corporation's Brief in Support of Plaintiff's Motion for Preliminary Approval". Orin Safier v. Western Digital Corporation. Western Digital Corporation.
44. Judge, Peter (2007-10-26). "Seagate pays out over gigabyte definition". ZDNet.
45. Allison Dexter, "How Many Words are in Harry Potter?", [1]; shows 190,637 words
46. Kilobytes Megabytes Gigabytes Terabytes (Stanford University)
47. Perenson, Melissa J. (4 January 2007). "Hitachi Introduces 1-Terabyte Hard Drive".
48. Gross, Grant (24 November 2007). "Internet Could Max Out in 2 Years, Study Says".
49. Klein, Jack (2008), Integer Types in C and C++, retrieved 2015-06-18