Squeeze on Cash...

How would you like to improve response times by an average of over 30%? The gain costs a small amount of CPU time and virtual storage, and doesn't require any time spent fiddling with sensitive system tuning parameters. It is a technique that very few mainframe sites exploit; I have been investigating it myself over the last few weeks. The discussion will get reasonably technical at times, but please bear with me.

This looks like getting something for nothing; in fact, all that really happens is that more of one resource is used up in order to save another. In a large mainframe system, the top priority is response time at end-user terminals. This is made up of time spent in the main processor, actually carrying out a transaction, plus time spent in the 'network' transmitting data between the processor and a user's terminal. A typical split between these two components might be 40% of the time in the processor and 60% in the network. If we can spend a little extra processor time, a resource that is usually well managed, to save network time, overall response times will improve. This is particularly important when the network is heavily loaded, as relieving a bottleneck there can give more improvement than expected.
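To put rough numbers on that, here is the back-of-the-envelope sum. The 40/60 split is the one quoted above; the assumption that the network component can be halved is mine, purely for illustration:

```python
# Illustrative arithmetic only: the 40/60 split comes from the text above;
# the assumption that the network component can be halved is mine.
processor = 0.40       # fraction of response time spent in the processor
network = 0.60         # fraction of response time spent in the network
network_saving = 0.50  # assumed reduction in the network component

new_total = processor + network * (1 - network_saving)
print(f"new response time = {new_total:.2f} of the original "
      f"({1 - new_total:.0%} improvement)")
# -> new response time = 0.70 of the original (30% improvement)
```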

As you probably realise, I am talking about data compression. The algorithms used to get good compression are well understood from PC data compression products. This isn't file compression, which is used a lot on PCs to save disk space, but compression of data being transmitted through VTAM. Remotely connected terminals are the most likely bottleneck; less data transmitted means a faster response. Reducing the volume of data also reduces the load on channels and terminal controllers, which again can improve response times.

The cost is a small amount of extra main processor time. Several packages are available from third party software suppliers to do this.

Compression is based on the structure of 3270 data streams. There are two main categories: inbound and outbound. Outbound compression means optimising the data sent out, that is, building efficient data streams. If part of the data stream is already displayed, skip its transmission; also collapse repeated characters and blanks. One of the 3270 data stream orders, Repeat to Address, lets a four-byte sequence replace any number of blanks, box characters, or other repeated characters. This could be done with knowledge of the application being optimised; or (easier, but with more overhead) by keeping a copy of each display in a buffer and comparing against it.
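As a rough illustration of both tricks, here is a minimal Python sketch. The shadow buffer is the copy of what the terminal is already showing; the ('RA', char, count) marker merely stands in for the real Repeat to Address order, and real 3270 buffer addressing and order codes are more involved than this:

```python
def optimise_outbound(new_screen: bytes, shadow: bytes) -> list:
    """Build a list of (position, data) fragments worth transmitting.

    Two tricks: skip anything the terminal is already displaying (per the
    shadow copy of the screen), and collapse runs of a repeated character
    into a ('RA', char, count) marker standing in for Repeat to Address.
    """
    fragments = []
    i = 0
    while i < len(new_screen):
        # Already on the display? Then don't send it again.
        if i < len(shadow) and new_screen[i] == shadow[i]:
            i += 1
            continue
        # Find the run of identical characters starting here.
        j = i
        while j < len(new_screen) and new_screen[j] == new_screen[i]:
            j += 1
        if j - i > 4:   # longer than the four-byte order itself: worth collapsing
            fragments.append((i, ("RA", new_screen[i], j - i)))
        else:
            fragments.append((i, new_screen[i:j]))
        i = j
    return fragments
```

A real product would also merge adjacent fragments and generate proper buffer addressing; and, as noted, keeping a shadow buffer for every terminal is exactly where the extra virtual storage goes.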

A well written application, or a good 4GL, should do this already. But most don't, and a lot of the code running today is very old (10 to 20 years); you can bet that it is not well written!

It is quite easy to write a utility to do this, either in CICS (which is where most home-grown ones run) or even in VTAM.

Inbound is different. It means sending information to the screen in such a way that as little as possible comes back. In a 3270 data stream, part of each field attribute is the Modified Data Tag (MDT). If the MDT is set, the contents of the field are sent back to the application on the next read, even if the data hasn't changed. This is usually done because programmers are lazy; but some high level languages also work this way.
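The MDT is just the low-order bit of the 3270 field attribute byte, so testing or clearing it is trivial. A tiny sketch (the attribute value itself is made up):

```python
MDT = 0x01        # Modified Data Tag: low-order bit of the field attribute byte

attribute = 0xC9  # example attribute byte, value chosen purely for illustration
if attribute & MDT:
    print("this field will be returned on the next read, changed or not")
attribute &= ~MDT & 0xFF   # clear the MDT so the field is not sent back unchanged
```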

So, if you want to do inbound data compression, you could write an exit to strip the MDTs out of outbound data, store a copy of the screen data sent, and then rebuild the full input from the stored copy and MDT settings when the (much smaller) reply comes back.
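A minimal sketch of that exit logic follows. Real exits live in assembler inside CICS or VTAM and real 3270 parsing is messier; the field layout and names here are purely illustrative:

```python
class InboundOptimiser:
    """Strip MDTs on the way out, rebuild the full input on the way back."""

    def __init__(self):
        self.saved_fields = {}   # field position -> (had_mdt, contents)

    def outbound(self, fields):
        """fields: {position: (attribute, contents)} about to be sent out."""
        stripped = {}
        for pos, (attr, data) in fields.items():
            self.saved_fields[pos] = (bool(attr & 0x01), data)
            stripped[pos] = (attr & ~0x01 & 0xFF, data)   # clear the MDT
        return stripped

    def inbound(self, returned_fields):
        """returned_fields: {position: contents} the terminal actually sent."""
        rebuilt = {}
        for pos, (had_mdt, old_data) in self.saved_fields.items():
            if pos in returned_fields:    # the user really changed this field
                rebuilt[pos] = returned_fields[pos]
            elif had_mdt:                 # the application expected it back anyway
                rebuilt[pos] = old_data
        return rebuilt
```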

Once again, this can be done at CICS (or IMS) level, or at VTAM level. The advantage of doing it at the VTAM level is that you can do a couple of other tricks that will help performance.

Everyone knows that session managers impose a large overhead on the system. Part of that is because they buffer all screen displays, and so increase storage usage; but a lot of it is because they use the VTAM READBUF (Read Buffer) request to retrieve the current screen contents. A VTAM level compression utility, which already holds a copy of each screen, is in a good position to emulate READBUF responses. This is something session managers do not do for themselves, so it would improve response for any of them.
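The idea is simply that the utility already keeps an image of each session's screen, so it can answer a Read Buffer locally instead of turning the request round to the remote terminal. A hedged sketch; the class and method names are invented for illustration:

```python
class ScreenCache:
    """Per-session screen images kept by the compression utility."""

    def __init__(self):
        self.images = {}   # session id -> last full screen image sent

    def note_outbound(self, session, screen_image: bytes):
        # Record what the terminal is now displaying.
        self.images[session] = screen_image

    def read_buffer(self, session, send_to_terminal):
        """Answer a READBUF request from the cache when possible."""
        if session in self.images:
            return self.images[session]      # no trip to the remote terminal
        return send_to_terminal(session)     # fall back to a real Read Buffer
```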

What else can be done? Use genuine data compression (Lempel-Ziv, LZW, and so on) on a complete buffer. This requires the ability to decompress the data at the receiving end, which 3270 terminals are not intelligent enough to do. However, 3274 (and other) controllers are smart enough. I am surprised that no one has written a utility to live inside the NCP (the Network Control Program that runs in 3725/3745 communications controllers). Of course, if PCs are emulating terminals, the emulator could be written to do this. Compressed data could be sent as a field in a standard 3270 data stream; or (this might be too sophisticated for the software companies) all data could be sent using LU 6.2 as the transport mechanism.
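As an illustration of the emulator approach, here is a sketch using Python's zlib. DEFLATE is an LZ77 derivative; a real product might use LZW or something proprietary, and the framing shown here is invented:

```python
import zlib

def compress_outbound(datastream: bytes) -> bytes:
    """Compress a complete outbound 3270 buffer for a co-operating emulator."""
    return zlib.compress(datastream)

def decompress_at_emulator(payload: bytes) -> bytes:
    """The emulator end: recover the original data stream before display."""
    return zlib.decompress(payload)

# Crude screen-like data: long runs of blanks around a little real text.
original = b" " * 1500 + b"ACCOUNT ENQUIRY" + b" " * 400
wire = compress_outbound(original)
assert decompress_at_emulator(wire) == original
print(f"{len(original)} bytes -> {len(wire)} bytes on the wire")
```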

What does this buy you? Several commercial products do parts of what I have described above: the SuperOptimiser series from BMC comes to mind, as does VTAM/Express from SofTouch. What I have seen is a reduction in traffic of around 10% on mixed TSO data streams from the inbound and outbound optimisation described above. That is actually quite good, as TSO is reasonably well optimised already and its output is very unpredictable. The figure rises to 40 to 60% for a heavy CICS system where most of the transactions are the same, comprising old code written in-house some years ago. I would guess that an NCP or emulator compressor would add another 20 to 30% to this.

If your priority is to maintain end-user service levels, which means remote terminal response times, VTAM level compression is well worth a look.