A customer has a chain of retail stores across the country, and each store had long used traditional point-of-sale credit card terminals for accepting charges. Every authorization involved 30-45 seconds of delay while the terminal placed its phone call to the processor, which made for a less-than-satisfactory experience for the purchaser standing at the counter waiting for the transaction to complete (and for those standing in line behind).
All the stores are connected to the main office via IP-based frame relay or VPN circuits, and we believed we could dramatically speed up authorizations by using this network along with a dedicated circuit to the card processor.
I created the server process that ran on the company's main UNIX server: it managed the single 56k serial line to the credit card processor while listening for TCP/IP connections from clients. It would accept a connection, receive an authorization request, queue the request for its trip to the processor, and route the response back to the client. We turned a 30-45 second transaction into a 3-5 second one.
The server was written in C (later in C++), and it used a single-threaded model to manage all the communications. The TCP network I/O was straightforward enough and followed the customary server paradigm (listen / accept / read / write / close), but the serial protocol was a seriously complicating factor.
The processor used a lousy serial polling protocol with some terrible timing windows that made data loss likely, and we had to bend over backwards to avoid them. This was made even more interesting by our decision to forgo multithreading: though threads were well supported by UnixWare, we decided that a single-threaded model running all the I/O through a select() loop would be more portable and easier to debug.
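The skeleton of that loop looked something like the sketch below. This is illustrative rather than the original code: the helper functions (handle_serial_poll(), handle_client_request(), expire_stale_requests()) are hypothetical stand-ins for the real work, and error handling is pared down to the bone.

    #include <sys/select.h>
    #include <sys/socket.h>
    #include <unistd.h>

    void handle_serial_poll(int fd);      /* hypothetical helpers:  */
    void handle_client_request(int fd);   /* the real work happened */
    void expire_stale_requests(void);     /* in routines like these */

    /* One thread, one loop: multiplex the listening socket, the
     * serial line to the processor, and every client connection
     * through a single select(). */
    void event_loop(int listen_fd, int serial_fd)
    {
        int client_fds[FD_SETSIZE];
        int nclients = 0;

        for (;;)
        {
            fd_set rfds;
            struct timeval tv;
            int i, maxfd;

            FD_ZERO(&rfds);
            FD_SET(listen_fd, &rfds);
            FD_SET(serial_fd, &rfds);
            maxfd = (listen_fd > serial_fd) ? listen_fd : serial_fd;

            for (i = 0; i < nclients; i++)
            {
                FD_SET(client_fds[i], &rfds);
                if (client_fds[i] > maxfd) maxfd = client_fds[i];
            }

            tv.tv_sec  = 1;        /* wake at least once a second */
            tv.tv_usec = 0;        /* to run timeout processing   */

            if (select(maxfd + 1, &rfds, 0, 0, &tv) < 0)
                continue;          /* EINTR and friends */

            if (FD_ISSET(listen_fd, &rfds) && nclients < FD_SETSIZE)
            {
                int fd = accept(listen_fd, 0, 0);
                if (fd >= 0)
                    client_fds[nclients++] = fd;
            }

            if (FD_ISSET(serial_fd, &rfds))
                handle_serial_poll(serial_fd);

            for (i = 0; i < nclients; i++)
                if (FD_ISSET(client_fds[i], &rfds))
                    handle_client_request(client_fds[i]);

            expire_stale_requests();
        }
    }

With only one thread there is never a lock to take and never a race between the serial side and the network side; everything interesting happens between two calls to select().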
The program ran as a daemon in the background after being launched automatically at system boot time, and it maintained a shared memory segment that a user-level monitor program could query to display the status of the server.
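A minimal sketch of how such a status segment might be set up, using System V shared memory; the key, the structure layout, and the field names here are all invented for illustration:

    #include <sys/ipc.h>
    #include <sys/shm.h>
    #include <time.h>

    /* Hypothetical layout of the status segment. */
    struct server_status {
        time_t start_time;          /* when the daemon came up        */
        long   requests_queued;     /* currently awaiting a poll      */
        long   requests_done;       /* completed authorizations       */
        long   requests_expired;    /* gave up waiting for the line   */
        int    polling_ok;          /* nonzero if the processor polls */
    };

    #define STATUS_SHM_KEY 0x43415244   /* arbitrary illustrative key */

    /* Server side: create (or find) the segment and attach it. */
    struct server_status *attach_status(void)
    {
        void *p;
        int   id = shmget(STATUS_SHM_KEY, sizeof(struct server_status),
                          IPC_CREAT | 0644);
        if (id < 0)
            return 0;
        p = shmat(id, 0, 0);
        return (p == (void *) -1) ? 0 : (struct server_status *) p;
    }

The monitor program attaches the same segment read-only (shmat() with SHM_RDONLY) and simply formats the counters for display, so watching the server costs the server nothing.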
The clients - for Windows and UNIX - accepted a preformatted credit-card authorization request from the point-of-sale software, made a TCP request to the server, and waited for the response to come back down the same connection. The clients were not terribly intelligent and knew very little about the server's inner workings.
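Because the client was deliberately dumb, its core fits in one small function. This is a sketch, not the shipped code, and authorize() is an invented name:

    #include <string.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    /* Ship a preformatted request to the server and block until the
     * response comes back down the same connection.  Returns the
     * number of response bytes, or -1 on any failure. */
    int authorize(const char *host, int port,
                  const char *req, int reqlen,
                  char *resp, int resplen)
    {
        struct sockaddr_in sa;
        int fd, n;

        memset(&sa, 0, sizeof sa);
        sa.sin_family = AF_INET;
        sa.sin_port   = htons(port);
        if (inet_pton(AF_INET, host, &sa.sin_addr) <= 0)
            return -1;

        if ((fd = socket(AF_INET, SOCK_STREAM, 0)) < 0)
            return -1;

        if (connect(fd, (struct sockaddr *) &sa, sizeof sa) < 0
         || write(fd, req, reqlen) != reqlen)
        {
            close(fd);
            return -1;
        }

        n = read(fd, resp, resplen);  /* blocks until the server answers */
        close(fd);
        return n;
    }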
The server had a large loop where it would accept requests from clients and queue them while they awaited their trip to the card processor. Since we could send only one request down the serial line for each poll, a sudden inrush of authorization requests could "stack up" waiting for a ride to the processor. Any request that was not sent to the processor within a fixed time period was returned to the client with an error message.
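A sketch of that pending queue, with an invented fixed-size array and an invented 30-second limit (the real data structures and timeout are not shown in this writeup):

    #include <string.h>
    #include <time.h>
    #include <unistd.h>

    #define MAX_PENDING   64
    #define MAX_WAIT_SECS 30            /* illustrative limit */

    struct pending {
        int    client_fd;               /* where the answer goes back */
        time_t queued_at;               /* when it joined the queue   */
        char   request[256];            /* preformatted auth request  */
        int    reqlen;
    };

    static struct pending queue[MAX_PENDING];
    static int nqueued = 0;

    /* Answer one poll: the oldest request gets the ride (the caller
     * removes it from the queue once the send succeeds). */
    struct pending *next_for_processor(void)
    {
        return (nqueued > 0) ? &queue[0] : 0;
    }

    /* Sweep the queue and fail anything that has waited too long,
     * so the client sees an error instead of a dead connection. */
    void expire_stale_requests(void)
    {
        time_t now = time(0);
        int i = 0;

        while (i < nqueued)
        {
            if (now - queue[i].queued_at > MAX_WAIT_SECS)
            {
                static const char err[] = "ERR timeout\n"; /* invented format */
                write(queue[i].client_fd, err, sizeof err - 1);
                close(queue[i].client_fd);
                memmove(&queue[i], &queue[i + 1],
                        (nqueued - i - 1) * sizeof queue[0]);
                nqueued--;
            }
            else
                i++;
        }
    }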
The overall flow in the server was:

 - accept a new TCP connection and read the client's authorization request
 - queue the request while it awaited the next poll from the processor
 - answer each poll by sending the oldest queued request down the serial line
 - receive the authorization response from the processor
 - route the response back down the waiting client's connection and close it
 - expire any request that waited too long, returning an error to its client
The server, of course, managed timeouts and error conditions, and it logged nearly everything.
If the server lost polling for more than a certain time, it stopped accepting inbound connections altogether, so that subsequent client connections would receive a "connection refused" error immediately rather than waiting for a timeout. Because the clients all had the ability to try a list of servers, not just one, this failover turned out to be very fast. The customer had a second leased line to the CC processor that the backup server used for failover.
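The client's half of that failover is easy to sketch: walk the configured server list and let a refused connection push you to the next entry. This builds on the hypothetical authorize() from the earlier client sketch:

    /* From the earlier client sketch (hypothetical). */
    int authorize(const char *host, int port,
                  const char *req, int reqlen,
                  char *resp, int resplen);

    struct server { const char *host; int port; };

    /* Try each configured server in order.  A downed server refuses
     * the connection immediately, so falling through to the backup
     * costs almost nothing. */
    int authorize_with_failover(const struct server *list, int nservers,
                                const char *req, int reqlen,
                                char *resp, int resplen)
    {
        int i, n;

        for (i = 0; i < nservers; i++)
        {
            n = authorize(list[i].host, list[i].port,
                          req, reqlen, resp, resplen);
            if (n > 0)
                return n;    /* got a response from this server */
        }
        return -1;           /* every server failed */
    }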
This software has been in continuous production for several years on multiple servers, and has processed hundreds of thousands of transactions.