1997 saw the arrival of the 56 Kbit/s modem, despite the absence of any international standard for this speed. The K56Flex group of companies, including 3Com, Ascend, Hayes, Motorola, Lucent and Rockwell, used Rockwell chipsets to achieve the faster speed, while US Robotics used its own x2 technology. The two systems were incompatible, forcing users and Internet Service Providers (ISPs) to opt for one or the other. Moreover, 56K technology has basic limitations: it uses asymmetric data rates and so can achieve high speeds only when downloading data from a source such as an ISP’s server.
Most telephone central offices (COs), or exchanges, in this and almost every other country are digital, as are the connections between them. All ISPs have digital lines linking them to the telephone network (in Europe, either E1 or ISDN lines). The lines to most homes and offices, however, are still analogue, which is a bugbear when it comes to data exchange: they have limited bandwidth and suffer from line noise (mostly static). They were designed to carry telephone conversations rather than digital data, so even with compression there is only so much data that can be squeezed onto them. Hence the absurdity that digital data from a PC has to be converted to analogue (by a modem) and back to digital (by the phone company) before it reaches the network.
56K makes the most of the much faster part of the connection – the digital lines. Data can be sent from the ISP over an entirely digital network until it reaches the final leg of the journey, from a local CO to the home or office. It then uses pulse code modulation (PCM) to overlay the analogue signal and squeeze as much as possible out of the analogue side of the connection. However, there is a catch: 56K technology allows for only one digital-to-analogue conversion in the path, so if a section of the connection happens to run over analogue and then return to digital, it will only be possible to connect at a maximum of 33.6 Kbit/s.
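The single-conversion rule can be illustrated with a toy model (the segment notation and function name are invented for illustration, not part of any standard): describe the downstream path as a string of 'D' (digital) and 'A' (analogue) segments and count the digital-to-analogue transitions.

```python
def max_downstream_rate(path: str) -> int:
    """Theoretical downstream ceiling in bit/s for a path described
    as a string of segments: 'D' = digital, 'A' = analogue.

    Toy model: 56K tolerates exactly one digital-to-analogue
    conversion (the final hop to the subscriber); any additional
    D->A conversion drops the ceiling to 33.6 Kbit/s.
    """
    d_to_a = sum(1 for cur, nxt in zip(path, path[1:])
                 if cur == "D" and nxt == "A")
    return 56000 if d_to_a <= 1 else 33600

# All-digital trunks with a single analogue local loop: full speed.
print(max_downstream_rate("DDDA"))   # 56000
# An analogue section mid-route forces a second D->A conversion.
print(max_downstream_rate("DADA"))   # 33600
```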
The reason it’s not possible to upload at 56K is simply that the analogue lines are not good enough. There are innumerable obstacles that can prevent a clear signal getting through, such as in-house wiring anomalies, varying wiring distances (between 1 and 6 km) and splices. It is still theoretically possible to achieve a 33.6 Kbit/s data transfer rate upstream, and work is being carried out to perfect a standard that will increase this by a further 20 to 30%. Another problem created by sending a signal from an analogue line to a digital line is the quantisation noise produced by the analogue-to-digital conversion (ADC).
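To give a rough sense of why quantisation noise matters, the textbook signal-to-noise ratio for an ideal uniform N-bit quantiser driven by a full-scale sine wave is about 6.02N + 1.76 dB. This formula is standard signal-processing theory rather than something stated in the article, and the phone network actually uses logarithmic (μ-law/A-law) rather than uniform coding, so treat it as an order-of-magnitude sketch:

```python
def quantisation_snr_db(bits: int) -> float:
    """Ideal SNR (dB) of a uniform quantiser with a full-scale
    sine input: 6.02 * N + 1.76 (standard approximation)."""
    return round(6.02 * bits + 1.76, 2)

# Each extra bit buys roughly 6 dB of headroom over the noise floor.
print(quantisation_snr_db(8))   # 49.92 dB for an 8-bit sample
```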
The digital-to-analogue conversion (DAC) can be thought of as representing each eight-bit code as one of 256 voltage levels – a translation performed 8,000 times a second. By sampling this signal at the same rate, the 56 Kbit/s modem can in theory pass 64 Kbit/s (8,000 × 8) without loss. This simplified description omits other losses which limit the speed to 56 Kbit/s.
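The arithmetic above can be checked directly. The drop from 64 to 56 Kbit/s is commonly attributed to effects such as robbed-bit signalling, which on some trunks reduces the reliably usable bits per sample from eight to seven; that attribution is a widely cited explanation rather than a claim made in the article:

```python
SAMPLES_PER_SECOND = 8000   # PCM sampling rate in the phone network
BITS_PER_SAMPLE = 8         # one of 256 voltage levels per sample

# Lossless ceiling implied by the sampling arithmetic.
theoretical = SAMPLES_PER_SECOND * BITS_PER_SAMPLE
print(theoretical)          # 64000 bit/s

# If only 7 of the 8 bits per sample are reliably usable
# (e.g. robbed-bit signalling on some trunks), the ceiling drops:
usable = SAMPLES_PER_SECOND * 7
print(usable)               # 56000 bit/s
```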
There is also some confusion as to the possible need to upgrade the PC serial port to cope with 56 Kbit/s operation. These days this usually uses the 16550 UART chip, itself once an upgrade to cope with faster modems. It is rated at 115 Kbit/s, but 56 Kbit/s modems can in theory overload it because they compress and decompress data on the fly, delivering more than 56 Kbit/s to the port when the data compresses well. In normal Internet use, however, data is mostly compressed before being sent, so compression by the modem is minimal.
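A quick back-of-the-envelope calculation shows how on-the-fly modem compression could in principle saturate the serial port. The 115,200 bit/s port rate and the compression ratios below are illustrative assumptions, not figures from the article:

```python
UART_LIMIT = 115_200   # common maximum rate of a 16550-based port, bit/s
LINE_RATE = 56_000     # modem line rate, bit/s

# Port-side rate after the modem decompresses incoming data at the
# given ratio; a ratio of 1 means incompressible (pre-compressed) data.
for ratio in (1, 2, 4):
    effective = LINE_RATE * ratio
    overloaded = effective > UART_LIMIT
    print(f"{ratio}:1 compression -> {effective} bit/s, "
          f"exceeds port limit: {overloaded}")
```

At 4:1 compression the port would need to carry 224 Kbit/s, well past the UART's limit; with typical pre-compressed Internet traffic (ratio near 1:1) the port is never the bottleneck.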
On 4 February 1998, after months of deadlock, the ITU finally brought the year-long standards battle to an end by agreeing a 56 Kbit/s standard, known as V.90. Though based on neither K56Flex nor x2, V.90 uses techniques similar to both, and the expectation was that manufacturers would be able to ship compliant products within weeks rather than months. The standard was formally ratified in the summer of 1998, after an approval process lasting several months.