AMD has long been the subject of polarizing debate among technology enthusiasts. The chapters of its history provide ample ammunition for countless discussions and no small measure of rancour. Given that it was once considered an equal to Intel, many wonder why AMD is failing today. However, it's probably fairer to ask how the company has survived for so long -- a question we intend to explore as we revisit the company's past, examine its present and gaze into its future.
AMD was founded in May 1969 by seven Fairchild Semiconductor employees headed by Jerry Sanders, Fairchild's director of marketing. You could say the company established itself as an underdog from the get-go by focusing its early efforts on redesigning parts from Fairchild and National Semiconductor instead of creating new products as Intel did with the iconic 4004. Though it came close during the early 2000s, as we'll discuss shortly, the company has largely struggled to shake the image of being Intel's shadow.
Back in 1969, a few months after its creation, AMD moved from Santa Clara, California -- Intel's hometown -- to Sunnyvale, bringing with it redesigned integrated circuits (ICs) that touted increased efficiency, stress tolerance and speed. AMD designed its chips to meet US military specifications, which proved a considerable advantage in the nascent computer industry, where quality control varied widely. Design and production of logic ICs continued to grow steadily.
By 1975, AMD had grown into a sizeable company. That year saw the introduction of the Am2900 IC family, which included multiplexers, ALUs, system clock generators and memory controllers -- building blocks that are found inside modern CPUs, but were separate integrated circuits at the time. AMD also began reverse engineering Intel's 8080 processor. Originally called the Am9080, it was renamed 8080A after AMD and Intel signed a cross-licensing agreement in 1976. Cost for AMD: $325,000 ($1.3 million in today's dollars).
The 8085 (3MHz) processor followed in 1977 and was soon joined by the 8086 (8MHz), as well as the 8088 (5-10MHz) in 1979, a year that also saw production begin at AMD's Austin, Texas facility. Early 1982 ushered in a new phase for the company. When IBM began moving from mainframes into PCs, it decided to outsource parts rather than develop them in-house. Intel's 8086 processor was chosen with the proviso that AMD act as a second source to guarantee a constant supply for IBM's PC/AT.
A contract was signed in February 1982 between Intel and AMD, with the latter producing 8086, 8088, 80186 and 80188 processors -- not just for IBM, but for the many IBM clones that proliferated, notably Compaq. AMD also started producing the Intel 80286, as the Am286, near the end of the year. This was to become the first truly significant desktop PC processor, and while Intel's models generally ranged from 6-10MHz, AMD's started at 8MHz and went as high as 16-20MHz -- a blow against Intel.
This period saw huge growth in the fledgling PC market. Noting that AMD had offered the Am286 with a significant speed boost over the 80286, Intel attempted to stop AMD in its tracks by excluding it from the license for the next-generation 386 processors. Arbitration took four and a half years to complete, and while the judgment found that Intel was not obligated to transfer every new product to AMD, it was determined that the larger chipmaker had breached an implied covenant of good faith.
Intel denied AMD access to the 386 license during a critical period when the IBM PC's market share grew from 55% to 84%. Left without access to Intel's specifications, AMD took over five years to reverse-engineer the 80386 into the Am386, but once completed, it again proved to be more than a match for Intel's design. Where the Intel 386 reached 33MHz, the Am386DX hit 40MHz, closing in on the 486's performance. This was probably the first instance of what would become AMD's trademark: offering a better performance/price ratio.
The Am386's success was followed by the release of 1993's highly competitive 40MHz Am486, which offered roughly 20% more performance than Intel's 33MHz i486 for the same price. This was replicated through the entire 486 lineup, and while Intel's 486DX topped out at 100MHz, AMD -- predictably at this stage -- offered a snappier 120MHz option. To illustrate AMD's good fortune in this period, the company's revenue doubled from just over $1 billion in 1990 to well over $2 billion in 1994.
In 1995, AMD introduced the Am5x86 processor as a successor to the 486, offering it as a direct upgrade for older computers. The Am5x86 P75+ boasted a 150MHz frequency, with the "P75" referencing performance similar to Intel's Pentium 75. The "+" signified that the AMD chip was slightly faster at integer math than Intel's solution. Intel had switched naming conventions to distance itself from products by AMD and other vendors. The Am5x86 was a great revenue earner for AMD, both from new sales and from upgrades of 486 machines. As with the Am286, 386 and 486, AMD continued to extend the lifespan of these parts by offering them as embedded solutions.
March 1996 saw the introduction of AMD's first in-house processor, the 5k86, later renamed the K5. The chip was designed to compete with Intel's Pentium and Cyrix's 6x86 series. Executing well with the K5 was a pivotal point in AMD's history: the chip had a much more powerful floating-point unit than Cyrix's -- roughly equal to the Pentium 100's -- while its integer performance matched the Pentium 200. Unfortunately, the project was dogged by design and manufacturing issues that resulted in the CPU missing its frequency goals, arriving late to market and suffering poor sales. Opportunity missed.
By this time, AMD had spent $857 million in stock on NexGen, a small fabless chip company whose processors were made by IBM. AMD's K5 and the in-development K6 had scaling issues at higher clock speeds (~150MHz and above), while NexGen's Nx686 had already demonstrated a 180MHz core speed. After the buyout, the Nx686 became AMD's K6, and the original in-house K6 design was consigned to the scrapyard.
AMD's rise mirrored Intel's decline from the early beginnings of the K6 architecture, which was pitted against Intel's Pentium, Pentium II and (largely rebadged) Pentium III. The K6 accelerated AMD's success, and the CPU owed its existence to an ex-Intel employee, Vinod Dham (a.k.a. the "Father of Pentium"), who left Intel in 1995 to work at NexGen. Dham was instrumental in creating what would become the K6.
When the K6 hit shelves in 1997, it represented a viable alternative to the Pentium MMX, and while Intel would later stumble with its underwhelming Netburst architecture, the K6 went from strength to strength -- from 233MHz in the initial stepping, to 300MHz for the "Little Foot" revision in January 1998, 350MHz in the "Chomper" K6-2 of May 1998, and 550MHz in September 1998 with the "Chomper Extended" revision. The K6-2 introduced AMD's 3DNow! SIMD instruction set (similar to Intel's SSE), though this came with a downside: programmers needed to incorporate the new instructions, and patches and compilers had to be rewritten to utilize the feature.
Like the initial K6, the K6-2 represented outstanding value, often costing half as much as Intel's Pentium chips. The final iteration, the K6-III, was a more complicated CPU: its transistor count stood at 21.4 million -- up from 8.8 million in the first K6 and 9.4 million in the K6-2 -- and it incorporated AMD's PowerNow!, which dynamically altered clock speeds according to workload. With clock speeds eventually reaching 570MHz, the K6-III was fairly expensive to produce, and its lifespan was cut short by the arrival of the K7, which was better suited to compete with the Pentium III and beyond.
The arrival of AMD's K7 (better known by its model name, Athlon) in 1999 represents the pinnacle of the company's golden age. Starting at 500MHz, Athlon CPUs utilized the new Slot A and a new internal system bus, licensed from DEC, that operated at 200MHz, eclipsing the 133MHz Intel offered at the time. June 2000 brought the Athlon Thunderbird, a CPU cherished by many for its overclockability, which incorporated DDR RAM support and a full-speed on-die L2 cache.
Thunderbird and its successors -- Palomino, Thoroughbred, Barton and Thorton -- battled Intel's Pentium 4 throughout the first five years of the millennium, usually at a lower price point. The Athlon was joined in September 2003 by the K8 (codenamed ClawHammer), better known as the Athlon 64 because it added a 64-bit extension to the x86 instruction set.
This brief episode is usually cited as AMD's defining moment. While AMD was surging, the speed-at-any-cost approach of Intel's Netburst architecture (particularly with the Pentium 4 family) was being exposed as an exercise in hubris.
So, what happened? Why didn't AMD continue on the path to greater glory? This is generally where the heated debate starts...