GeForce 256
Release date | October 11, 1999 (SDR); December 13, 1999 (DDR)[1] |
---|---|
Codename | NV10 |
Architecture | Celsius |
Fabrication process | TSMC 220 nm (CMOS) |
Cards | |
Mid-range | GeForce 256 SDR |
High-end | GeForce 256 DDR |
API support | |
Direct3D | Direct3D 7.0 |
OpenGL | OpenGL 1.2.1 (T&L) |
History | |
Predecessor | RIVA TNT2 |
Successor | GeForce 2 Series |
The GeForce 256 is the original release in Nvidia's GeForce product line. Announced on September 1, 1999 and released on October 11, 1999, the GeForce 256 improves on its predecessor (RIVA TNT2) by increasing the number of fixed pixel pipelines, offloading host geometry calculations to a hardware transform and lighting (T&L) engine, and adding hardware motion compensation for MPEG-2 video. It offered a sizeable leap in 3D PC gaming performance and was the first fully Direct3D 7-compliant 3D accelerator.
The chip was manufactured by TSMC using its 220 nm CMOS process.[2] There are two versions of the GeForce 256 – the SDR version released in October 1999 and the DDR version released in mid-December 1999 – each with a different type of SDRAM memory. The SDR version uses SDR SDRAM memory from Samsung Electronics,[3][4] while the later DDR version uses DDR SDRAM memory from Hyundai Electronics (now SK Hynix).[5][6]
Architecture
GeForce 256 was marketed as "the world's first 'GPU', or Graphics Processing Unit", a term Nvidia defined at the time as "a single-chip processor with integrated transform, lighting, triangle setup/clipping, and rendering engines that is capable of processing a minimum of 10 million polygons per second".[7]
The "256" in its name stems from the "256-bit QuadPipe Rendering Engine", a term describing the four 64-bit pixel pipelines of the NV10 chip. In single-textured games the NV10 could output four pixels per clock, while a dual-textured scenario limited it to two multitextured pixels per clock, as the chip, like the TNT2, still had only one TMU per pipeline.[8] In terms of rendering features, the GeForce 256 also added support for cube environment mapping[8] and dot-product (Dot3) bump mapping.[9]
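The pipeline arithmetic above can be sketched numerically. This is an illustrative model only, using the commonly reported 120 MHz core clock of the GeForce 256; the constants and function names are ours, not Nvidia's:

```python
# Illustrative fillrate model for the NV10 "QuadPipe" layout:
# 4 pixel pipelines, 1 TMU per pipeline, ~120 MHz core clock (assumed here).
CORE_CLOCK_MHZ = 120
PIPELINES = 4
TMUS_PER_PIPELINE = 1

def fillrate_mpixels(textures_per_pixel: int) -> float:
    """Peak pixel fillrate in Mpixels/s for a given number of texture layers.

    With one TMU per pipeline, each additional texture layer costs an extra
    pass through the pipeline, so dual-texturing halves the pixel rate.
    """
    cycles_per_pixel = max(1, textures_per_pixel // TMUS_PER_PIPELINE)
    return CORE_CLOCK_MHZ * PIPELINES / cycles_per_pixel

print(fillrate_mpixels(1))  # single-texturing: 480.0 Mpixels/s
print(fillrate_mpixels(2))  # dual-texturing:   240.0 Mpixels/s
```

The same toy model applied to the TNT2 (2 pipelines, 1 TMU each) shows why the NV10's advantage was largest in single-textured scenes.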
The integration of transform and lighting hardware into the GPU itself set the GeForce 256 apart from older 3D accelerators, which relied on the CPU to perform these calculations (known as software transform and lighting). By simplifying the overall 3D graphics solution, this integration brought the cost of hardware T&L to a new low, making it available on inexpensive consumer graphics cards rather than only in the expensive, professionally oriented niche of computer-aided design (CAD). NV10's T&L engine also allowed Nvidia to enter the CAD market for the first time, with a dedicated product line called Quadro. Quadro cards use the same silicon as the GeForce cards but ship with different driver support and certifications tailored to the requirements of CAD applications.[10]
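To make concrete what the T&L engine offloaded, here is a minimal sketch of the per-vertex work a CPU would otherwise do under software T&L: a 4×4 matrix transform plus a fixed-function diffuse (dot-product) light term. The function names and data are illustrative, not any real API:

```python
# Minimal per-vertex transform-and-lighting sketch (pure Python, illustrative).

def transform(m, v):
    """Multiply a 4x4 row-major matrix by a 4-component homogeneous vertex."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def diffuse(normal, light_dir):
    """Clamped dot-product term, the kind of fixed-function Lambertian
    lighting that Direct3D 7-class T&L hardware evaluated per vertex."""
    d = sum(n * l for n, l in zip(normal, light_dir))
    return max(0.0, d)

# Identity transform leaves a vertex unchanged; a light pointing along the
# surface normal yields full diffuse intensity.
identity = [[1.0 if r == c else 0.0 for c in range(4)] for r in range(4)]
print(transform(identity, [1.0, 2.0, 3.0, 1.0]))  # [1.0, 2.0, 3.0, 1.0]
print(diffuse([0.0, 1.0, 0.0], [0.0, 1.0, 0.0]))  # 1.0
```

Executed per vertex for every frame, this kind of loop is exactly the host-side load the NV10 moved into dedicated silicon.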
Product comparisons
Compared to previous high-end 3D game accelerators such as the 3dfx Voodoo3 3500 and Nvidia RIVA TNT2 Ultra, the GeForce 256 provided frame-rate improvements of 50% or more in some games (those specifically written to take advantage of hardware T&L) when paired with a low-budget CPU. The later release and widespread adoption of GeForce 2 MX/4 MX cards with the same feature set gave the GeForce 256 unusually long support, until approximately 2006, in games such as Star Wars or Half-Life 2, the latter of which featured a Direct3D 7 path targeting the fixed-function pipeline of these GPUs.
Without broad application support at the time, critics pointed out that the T&L technology had little real-world value. Initially, it was only somewhat beneficial in certain situations in a few OpenGL-based 3D first-person shooters, most notably Quake III Arena. Benchmarks using low-budget CPUs like the Celeron 300A would give favourable results for the GeForce 256, but benchmarks done with some CPUs such as the Pentium II 300 would give better results with some older graphics cards like the 3dfx Voodoo 2. 3dfx and other competing graphics-card companies pointed out that a fast CPU could more than make up for the lack of a T&L unit. Software support for hardware T&L was not commonplace until several years after the release of the first GeForce. Early drivers were buggy and slow, while 3dfx cards enjoyed efficient, high-speed, mature Glide API and/or MiniGL support for the majority of games. Only after the GeForce 256 was replaced by the GeForce 2, and ATI's T&L-equipped Radeon was also on the market, did hardware T&L become a widely utilized feature in games.
The GeForce 256 was also quite expensive for the time and did not offer tangible advantages over competitors' products outside of 3D acceleration. For example, its GUI and video playback acceleration were not significantly better than those offered by the competition or even by older Nvidia products. Additionally, some GeForce cards were plagued by poor analog signal circuitry, which caused blurry display output.[citation needed]
As CPUs became faster, the GeForce 256 exposed the main disadvantage of hardware T&L: a sufficiently fast CPU can perform T&L calculations faster than the GPU, making the GPU a hindrance to rendering performance. This changed the way the graphics market functioned, encouraging shorter graphics-card lifetimes and placing less emphasis on the CPU for gaming.
Motion compensation
The GeForce 256 introduced[11] motion compensation as a functional unit of the NV10 chip.[12][13] This first-generation unit would be succeeded by Nvidia's HDVP (High-Definition Video Processor) in the GeForce 2 GTS.
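At the heart of MPEG-2 motion compensation is a block copy: each macroblock of the frame being decoded is predicted from a block in a reference frame, displaced by a decoded motion vector. The sketch below shows that core operation in toy form; it is a pure-Python illustration of the general technique, not Nvidia's hardware interface:

```python
# Toy sketch of MPEG-2-style motion compensation: predict a block in the
# current frame by copying pixels from the reference frame, offset by a
# motion vector. (Illustrative only; names and sizes are ours.)

def predict_block(ref_frame, x, y, mv_x, mv_y, size=4):
    """Fetch a size x size block from ref_frame at (x + mv_x, y + mv_y).

    In a real decoder the residual (IDCT output) is then added to this
    prediction; the NV10's unit accelerated this memory-intensive step.
    """
    return [
        [ref_frame[y + mv_y + r][x + mv_x + c] for c in range(size)]
        for r in range(size)
    ]

# An 8x8 "reference frame" with a bright 4x4 patch in its lower-right corner.
ref = [[255 if r >= 4 and c >= 4 else 0 for c in range(8)] for r in range(8)]
# A motion vector of (+4, +4) recovers that patch for a block decoded at (0, 0).
block = predict_block(ref, 0, 0, 4, 4)
print(block[0][0])  # 255
```

Offloading this per-macroblock copy-and-add work was valuable because software MPEG-2 decoding was a significant CPU load on late-1990s systems.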
Specifications
Discontinued support
NVIDIA has ceased driver support for the GeForce 256 series.
Final drivers include:
- Windows 9x & Windows Me: 71.84, released on March 11, 2005
- Windows 2000 & 32-bit Windows XP: 71.89, released on April 14, 2005
- The Windows 2000/XP drivers may be installed on later versions of Windows, such as Windows 7; they do not, however, support the Windows 7 "Aero" effects.
References
- ↑ IGN staff (December 13, 1999). "News Briefs". http://pc.ign.com/news/13162.html.
- ↑ Singer, Graham (April 3, 2013). "History of the Modern Graphics Processor, Part 2". https://www.techspot.com/article/653-history-of-the-gpu-part-2/.
- ↑ "NVIDIA GeForce 256 SDR". https://videocardz.net/nvidia-geforce-256-sdr/.
- ↑ "K4S161622D Datasheet". Samsung Electronics. http://www.datasheetcatalog.com/datasheets_pdf/K/4/S/1/K4S161622D.shtml.
- ↑ "NVIDIA GeForce 256 DDR". https://videocardz.net/nvidia-geforce-256-ddr-64mb/.
- ↑ "HY5DV651622 Datasheet". Hynix. http://www.ic72.com/pdf_file/h/169210.pdf.
- ↑ "Graphics Processing Unit (GPU)". http://www.nvidia.com/object/gpu.html.
- ↑ Shimpi, Anand Lal. "NVIDIA GeForce 256 Part 1: To buy or not to buy". https://www.anandtech.com/show/391.
- ↑ Pabst, Thomas (February 27, 2001). "High-Tech And Vertex Juggling – NVIDIA's New GeForce3 GPU". https://www.tomshardware.com/reviews/high,294.html.
- ↑ "Nvidia Workstation Products". Nvidia.com. http://www.nvidia.com/page/workstation.html.
- ↑ "ActiveWin.Com: NVIDIA GeForce 4 Ti 4600 – Review". http://www.activewin.com/reviews/hardware/graphics/nvidia/gf4ti4600/gf3.shtml.
- ↑ "Technology brief". http://www.orpheuscomputing.com/downloads2/GeForce_HDVP_brief.pdf.
- ↑ "History of the Modern Graphics Processor, Part 2". https://www.techspot.com/article/653-history-of-the-gpu-part-2/.
External links
- NVIDIA: GeForce 256 – The World's First GPU from web archive
- ForceWare 71.84 drivers, Final Windows 9x/ME driver release
- ForceWare 71.89 drivers, Final Windows XP driver release
- techPowerUp! GPU Database