Mainframes and Networks

Some years ago, the idea that a network of single-user computers might replace a mainframe was preposterous. But (continuing a theme from the last few columns) IBM's recent announcements have changed all of this: for the first time, IBM will support the essential building blocks of a 'real' open system in their ES/9000 range and in MVS.

Disregard anything that you hear about Open Systems (note the capitals, this time; you can hear them in the salesman's voice); every supplier has a convincing reason why only they have an "Open System". They are all good reasons, no doubt. But it all boils down to portable data and applications, and these you can only have if your systems can easily exchange data. The trouble is that there are as many different ways of connecting systems as there are suppliers. The combination with probably (but only probably) the widest support is NFS (Network File System) run over TCP/IP (Transmission Control Protocol/Internet Protocol) on Ethernet. It isn't technically the best, but more of the closely connected systems use it than anything else.
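To make the point concrete, here is a minimal sketch (in Python, purely for illustration; the address, port number and message are arbitrary stand-ins) of the level at which TCP/IP lets two systems exchange data: one machine listens on a port, another connects and sends bytes. NFS, mail and the rest sit on top of connections of exactly this kind.

    import socket

    HOST, PORT = "127.0.0.1", 5050      # stand-ins for a real host on the Ethernet

    def serve_once():
        # One system waits for a connection and echoes back whatever arrives.
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.bind((HOST, PORT))
            srv.listen(1)
            conn, _addr = srv.accept()
            with conn:
                data = conn.recv(1024)
                conn.sendall(b"received: " + data)

    def send(message: bytes) -> bytes:
        # The other system connects, sends its data and reads the reply.
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
            cli.connect((HOST, PORT))
            cli.sendall(message)
            return cli.recv(1024)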

The first time that we heard people expounding the idea of networks of computers replacing mainframes, they were talking about local area networks of PCs, in the days when Novell was new, a file server was a 6 MHz IBM PC AT, and you needed a full-time in-house engineer to run a LAN of eight workstations. The total processor power was close to that of a small mainframe (a 4331, let us say), and the cost wasn't far different!

The problem with those early LANs (one of many) was that there was no way of getting the processors to share a workload, so you couldn't run a large batch job (an MRP program, let's say, or an oil-field analysis); you might have the processor power, but it just wasn't available in one place. These systems were also incredibly fragile; someone kicking a cable, or one workstation hanging, could cause the entire system to lock up.

Despite all of the problems, PC LANs had a role to play. They were easier to pay for than a mini or mainframe (I can't say that they were any cheaper, but you could start with a small system for a little money and grow it into a big system by stages), and for some uses they did things that a mainframe couldn't handle.

For a small group of users working closely together, but each doing quite disparate tasks, a LAN had lots of advantages. Each task was suitable for one person, who could use the power of a workstation to speed up and improve their work. Each task might also need checking and approval by others in the group, so that the joint tasks would be coordinated. There is no point in spending months or years developing these applications on a mainframe; each one is small and unique, but the combined work is more valuable than each individual element. This is what has become known as "workgroup computing".

Mainframe sites found some uses for small systems, as well. One of the biggest workloads on any mainframe is usually development. If the computer department can offload development work onto smaller systems, then service to the users is better. This often means that they can call existing PCs into play, thus staving off a very expensive upgrade to the mainframe system.

Development work for MVS with CICS and DB2 or IMS can easily be carried out on a reasonably powerful PC. The developer gets much faster response than from the mainframe, and can use tools identical to those on the mainframe: SPF/PC replaces ISPF for editing, and MicroFocus provides exactly equivalent COBOL compilers, with support for CICS, IMS and DB2.

Real networks that can compete on even terms with mainframes are being introduced to organisations by the 'back door'. Get a few Sun workstations and connect them together. Install a multiuser Unix system running on a 386 PC, just for a small departmental application. Then a couple more in other departments. Before you know it, all of these separate small systems have been linked into one network that can exchange files, mail and ideas, separate from the mainframe system of corporate choice. A few dumb terminals attached to Unix minis have metamorphosed into networked workstations. The next step is that they connect to the Internet, a worldwide network of mainly Unix systems that supports a surprisingly high volume of mail traffic (20Mb per day!).

Costs? A PC LAN has always been much more expensive than an equivalent multi-user system. Even five years ago, the cost of PCs (then only 286 ATs, of course) in a typical organisation was equal to that of the mainframe system they used. The mainframe is now possibly the smallest part of the investment (and return on investment) of a large organisation.

Benefits - Interpersonal computing. Once you have a network of Etherneted Unix workstations (be they NeXTs, Suns, or 386/486s running SCO or the still-to-come ACE and Solaris Unixes), you gain access to a lot of local power. Typical users find that the excellent word processors and DTP systems available are a step beyond PC versions. Because they have a true network, based on an operating system that was developed for networks, file exchange becomes more reliable. Mail is the most quoted example: once a suitable system is installed that users will make regular use of for other reasons, regular use of email in an organisation increases the 'span of control' of managers. The result is a flatter organisation, with each manager able to deal with more subordinates, which means that the organisation is more efficient, cheaper, and can respond faster to change.

Email on NeXT is a step forward in this respect. Users can send email that includes voice notes, graphics and even files. This produces a bigger change in work habits than might be expected. Instead of submitting a budget proposal in straight text, a manager can mail his manager the worksheet that he used to develop the budget. Because his manager can experiment with the data, he can be sure that the report isn't subtly biased towards results that favour just the requester. Hence management teams can work more effectively, with more trust.

Another possibility, demonstrated by the advanced research at NeXT Computer, is in supercomputing. NeXT supply a 'demonstration' application called Zilla, which allows you to schedule large programs to run on any underused processors in a network. The Japanese computer industry made a rare award to a foreigner for this pioneering work.
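To show the shape of the idea (and only the shape; this is not Zilla itself, just a minimal Python sketch, and the port, authkey and example arithmetic are invented), here is a coordinator that publishes chunks of a large job on a network queue, while idle machines pull chunks and send back partial results:

    # Run "python zilla_sketch.py server" on one machine, then
    # "python zilla_sketch.py worker <server-host>" on each underused machine.
    import queue
    import sys
    from multiprocessing.managers import BaseManager

    PORT = 50000                # arbitrary choice for the sketch
    AUTHKEY = b"zilla-sketch"   # shared secret, also arbitrary

    work_q = queue.Queue()      # chunks waiting to be processed
    result_q = queue.Queue()    # partial results coming back

    def get_work():
        return work_q

    def get_results():
        return result_q

    class JobManager(BaseManager):
        pass

    JobManager.register("work", callable=get_work)
    JobManager.register("results", callable=get_results)

    def run_server():
        mgr = JobManager(address=("", PORT), authkey=AUTHKEY)
        mgr.start()
        work, results = mgr.work(), mgr.results()
        # The 'large program' here is just summing a big range, split into chunks.
        chunks = [(i * 1_000_000, (i + 1) * 1_000_000) for i in range(32)]
        for chunk in chunks:
            work.put(chunk)
        total = sum(results.get() for _ in chunks)
        print("combined result:", total)
        mgr.shutdown()

    def run_worker(server_host):
        mgr = JobManager(address=(server_host, PORT), authkey=AUTHKEY)
        mgr.connect()
        work, results = mgr.work(), mgr.results()
        while True:                      # keep volunteering spare cycles
            lo, hi = work.get()          # blocks until a chunk is available
            results.put(sum(range(lo, hi)))

    if __name__ == "__main__":
        if sys.argv[1] == "server":
            run_server()
        else:
            run_worker(sys.argv[2])

The more idle machines that volunteer as workers, the faster the combined job completes; that is the whole trick behind scavenging spare cycles from a network.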

With the sheer power of modern workstations, it is not surprising that many companies are downsizing; given that an average PC today has the same processor power (if not I/O speed) as a typical mainframe of ten years ago, it should surprise nobody in the mainframe business.

This leads to the new role of the mainframe as a server, something IBM predicted several years ago with DB2 and distributed databases. The mainframe won't be used like a PC LAN file server; instead, it will mainly act as a very fast transaction server for SQL calls. With DB2 putting all of its processor drain onto the mainframe, there will be a large enough pool of processor power available to provide very fast processing of sophisticated SQL queries. By brute force, the distributed database problem will be solved.
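As a sketch of what that looks like from the workstation end, assume DB2 on the mainframe is reachable through an ODBC data source (the DSN, user, table and column names below are invented for illustration, and Python's pyodbc module stands in for whatever SQL interface a site actually uses). The workstation merely formulates the query; the mainframe does the heavy lifting and ships back only the result rows.

    import pyodbc

    # Connect to the mainframe's DB2 subsystem via an assumed ODBC data source.
    conn = pyodbc.connect("DSN=MAINFRAME_DB2;UID=appuser;PWD=secret")
    cursor = conn.cursor()

    # The query is shipped to DB2; the joining, sorting and aggregation all
    # run on the mainframe, which is what "transaction server" means here.
    cursor.execute(
        "SELECT region, SUM(order_value) "
        "FROM sales.orders "
        "WHERE order_date >= ? "
        "GROUP BY region",
        "1991-01-01",
    )
    for region, total in cursor.fetchall():
        print(region, total)

    conn.close()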