Intel’s Triton Chipsets Explained – their history, architecture and development

Triton 430FX

Introduced in early 1995, the 82430FX – to give it its full name – was Intel's first Triton chipset and conformed to the PCI 2.0 specification. It introduced support for EDO memory configurations of up to 128MB and for pipelined burst cache and synchronous cache technologies. However, it did not support a number of emerging technologies, such as SDRAM and USB, and was superseded in 1996 – little more than a year after its launch – by a pair of higher-performance chipsets.

Triton 430VX

The Triton 430VX chipset conforms to the PCI 2.1 specification and is designed to support Intel's Universal Serial Bus (USB) and Concurrent PCI standards. With the earlier 430FX, a bus master (on the ISA or PCI bus), such as a network card or disk controller, would lock the PCI bus whenever it transferred data in order to have a clear path to memory. This interrupted other processes and was inefficient, because the bus master would never make full use of the 100 MBps bandwidth of the PCI bus. With Concurrent PCI, the chipset can wrest control of the PCI bus from an idle bus master to give other processes access on a timeshare basis (a minimal sketch of this arbitration idea appears at the end of this section). Theoretically, this should allow data transfer rates of up to 100 MBps – 15% more than the 430FX chipset – and smooth intensive PCI tasks such as video playback even when bus masters are present.

The 430VX chipset was aimed fairly and squarely at the consumer market. It was intended to speed up multimedia and office applications, and it was optimised for 16-bit applications. Furthermore, it was designed to work with SDRAM, a...
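The following C sketch contrasts the two arbitration policies described above. The two-master setup, the one-idle-cycle-per-transfer timing and the round-robin grant rule are illustrative assumptions rather than Intel's actual arbitration logic; the point is simply that regranting the bus during an owner's idle cycles raises overall utilisation.

/* bus_sketch.c – an illustrative model, not Intel's actual arbitration
   logic. Each master can transfer one word only every other cycle,
   idling in between; that idle time is what Concurrent PCI reclaims. */
#include <stdio.h>

#define MASTERS 2
#define CYCLES  16
#define WORDS   6          /* words each master wants to transfer */

int main(void)
{
    /* Policy 1: 430FX-style locking – the owner keeps the bus until it
       has finished, so its idle cycles are simply wasted. */
    int locked[MASTERS] = { WORDS, WORDS };
    int done_locked = 0, owner = 0;
    for (int t = 0; t < CYCLES; t++) {
        if (locked[owner] == 0 && owner + 1 < MASTERS)
            owner++;                      /* bus released only on completion */
        if (locked[owner] > 0 && t % 2 == 0) {
            locked[owner]--;              /* transfer on even cycles ... */
            done_locked++;                /* ... idle on odd ones */
        }
    }

    /* Policy 2: Concurrent PCI-style timesharing – the arbiter regrants
       the bus each cycle, so one master transfers while the other idles. */
    int shared[MASTERS] = { WORDS, WORDS };
    int done_shared = 0;
    for (int t = 0; t < CYCLES; t++) {
        int m = t % MASTERS;              /* round-robin grant */
        if (shared[m] > 0) {
            shared[m]--;
            done_shared++;
        }
    }

    printf("locked bus     : %2d words in %d cycles\n", done_locked, CYCLES);
    printf("concurrent PCI : %2d words in %d cycles\n", done_shared, CYCLES);
    return 0;
}

Under these toy assumptions the locked policy moves 8 words in 16 cycles while the timeshared one moves 12 – the kind of utilisation gain the 430VX's Concurrent PCI support was intended to deliver.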
