A Load Balance Queue provides a way to use multiple printers for a single print queue. Jobs are normally sent to the main (load balance) queue, which dispatches them to the server queues or printers that do the actual printing as they become available. You can also send jobs directly to an individual server printer if a particular job requires special processing or setup. Because all of the server printers are shared by the load balance queue, they are said to be in a printer pool.
Edit the printcap file so it has the contents indicated below, create the /tmp/lp2 and /tmp/lp3 files with 0777 permissions, use checkpc -f to check the printcap, and then use lpc reread to restart the lpd server.
# printcap
lp:force_localhost
lp:server
   :sd=/var/spool/lpd/%P
   :sv=lp2,lp3
lp2:force_localhost
lp2:server
   :ss=lp
   :sd=/var/spool/lpd/%P
   :lp=/tmp/lp2
lp3:force_localhost
lp3:server
   :ss=lp
   :sd=/var/spool/lpd/%P
   :lp=/tmp/lp3
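With the printcap in place, the remaining setup steps from above might be carried out along these lines (a sketch; adjust paths and privileges to your system):

touch /tmp/lp2 /tmp/lp3
chmod 0777 /tmp/lp2 /tmp/lp3
checkpc -f
lpc reread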
The :sv=... option flags the queue as a load balance queue and lists the queues that are used for load balancing. The :ss=... option flags the queue as a server for a load balance queue and specifies the name of the load balance queue. When a job is sent to the load balance queue, the lpd server checks to see which server queues are available and sends the job to the first one to become available.
Execute the following commands to print the /tmp/hi file and observe the results:
h4: {274} % lpq
Printer: lp@h4  (subservers lp2, lp3)
 Queue: no printable jobs in queue
 Status: job 'papowell@h4+42' removed at 07:29:57.924
Server Printer: lp2@h4  (serving lp)
 Queue: no printable jobs in queue
Server Printer: lp3@h4  (serving lp)
 Queue: no printable jobs in queue
h4: {275} % lpr /tmp/hi
h4: {276} % lpq
Printer: lp@h4  (subservers lp2, lp3)
 Queue: 1 printable job
 Server: pid 4063 active
 Status: waiting for subserver to finish at 07:31:08.074
 Rank   Owner/ID           Class Job Files     Size Time
1      papowell@h4+62        A    62 /tmp/hi      3 07:31:07
Server Printer: lp2@h4  (serving lp)
 Queue: no printable jobs in queue
Server Printer: lp3@h4  (serving lp)
 Queue: no printable jobs in queue
h4: {277} % lpq
Printer: lp@h4  (subservers lp2, lp3)
 Queue: no printable jobs in queue
 Status: no more jobs to process in load balance queue at 07:31:12.317
Server Printer: lp2@h4  (serving lp)
 Queue: no printable jobs in queue
Server Printer: lp3@h4  (serving lp)
 Queue: no printable jobs in queue
 Status: job 'papowell@h4+62' removed at 07:31:10.311
The first lpq command shows how the status is displayed for a load balance queue: the queue and its server queues are shown together. Next, we use lpr to print a job (job id papowell@h4+62). We then use a couple of lpq commands to watch the job: it is first sent to the lp queue, which forwards it to the lp3 queue, which processes and removes it. (For purposes of demonstration we have artificially slowed down the operation of the load balance queue so that jobs remain in the queue long enough for us to display their status.) We can send another job to the load balance queue:
h4: {278} % lpr /tmp/hi
h4: {279} % lpq
Printer: lp@h4  (subservers lp2, lp3)
 Queue: no printable jobs in queue
 Status: no more jobs to process in load balance queue at 07:37:17.953
Server Printer: lp2@h4  (serving lp)
 Queue: no printable jobs in queue
 Status: job 'papowell@h4+89' removed at 07:37:15.936
Server Printer: lp3@h4  (serving lp)
 Queue: no printable jobs in queue
 Status: job 'papowell@h4+81' removed at 07:36:40.116
This time we see that the job was sent to lp2. The normal load balance queue operation is to use the server queues in round-robin order.
While this simple configuration is suitable for many installations, there are situations where the server queue must be chosen dynamically. For example, if the server queues are actually transferring jobs to remote clients, then as soon as a job is sent to the remote client the queue appears empty and available for use. To correctly determine whether such a queue is available, the status of the remote queue or destination of the server queue must be checked.
To handle this situation, a :chooser program or filter can be used. When the load balance queue is trying to decide where to send a job, it first checks the server queues to see if they are enabled for printing. If a :chooser program is specified in the load balance queue printcap entry, then it is started with the normal filter options and environment variables, supplemented as discussed below. The :chooser program reads a list of candidate queues from its STDIN, writes the chosen one to its STDOUT, and then exits. The lpd server checks the :chooser exit code: if it is zero (successful), then the chosen queue is used; otherwise the exit code is used as the result value of processing the job. This allows the chooser process not only to control the destination of the job but also to hold, remove, or abort the job handling process. If the :chooser does not specify a queue, then the job is skipped and another job is chosen.
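For illustration, a minimal chooser along these lines could implement the protocol just described; the /tmp/badqueues file and its one-queue-per-line format are invented for this sketch:

#!/bin/sh
# Sketch of a chooser: candidate queue names arrive one per line on
# STDIN; the chosen queue is written to STDOUT.
# /tmp/badqueues is a hypothetical file listing queues to avoid.
while read queue ; do
    if ! grep -x "$queue" /tmp/badqueues >/dev/null 2>&1 ; then
        echo "$queue"      # choose this queue
        exit 0             # zero exit code - lpd uses the chosen queue
    fi
done
exit 0                     # no queue written - the job is skipped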
One side effect of using a chooser program is that while there are jobs that can be processed in the queue, the lpd server needs to periodically check whether a server queue has become available. If it did this continually, a very high load would be put on the system. Instead, the chooser_interval option specifies a maximum time in seconds (default 10 seconds) between successive checks for an available server.
Normally, the chooser is applied to the first job in the queue. If the job cannot be printed, then lpd will wait for the chooser_interval time. However, the chooser can also be used to direct jobs according to their characteristics or other criteria. This means that the entire spool queue has to be scanned for work. If the :chooser_scan_queue flag is set to 1, then all of the jobs are tested to see if they can be sent to an appropriate destination. An example entry combining these options is shown below.
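For example, a load balance queue entry that polls at most every 30 seconds and scans the whole queue might look like this (the option values are purely illustrative):

lp:force_localhost
lp:server
   :sd=/var/spool/lpd/%P
   :sv=lp2,lp3
   :chooser=/tmp/chooser.script
   :chooser_interval=30
   :chooser_scan_queue=1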
Edit the printcap file so it has the contents indicated below, and create the /tmp/lp2 and /tmp/lp3 files with 0777 permissions. Then create the /tmp/chooser.script file with the contents indicated below and give it 0755 (executable) permissions. Make sure that the path to the head program used in chooser.script is correct. Use checkpc -f to check the printcap, and then use lpc reread to restart the lpd server.
# printcap
lp:force_localhost
lp:server
   :sd=/var/spool/lpd/%P
   :sv=lp2,lp3
   :chooser=/tmp/chooser.script
lp2:force_localhost
lp2:server
   :ss=lp
   :sd=/var/spool/lpd/%P
   :lp=/tmp/lp2
lp3:force_localhost
lp3:server
   :ss=lp
   :sd=/var/spool/lpd/%P
   :lp=/tmp/lp3

# /tmp/chooser.script
#!/bin/sh
echo CHOOSER $0 $@ >>/tmp/chooser
set >>/tmp/chooser
/usr/bin/head -1
exit 0
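The permission and restart steps might look like the following sketch:

chmod 0755 /tmp/chooser.script
checkpc -f
lpc reread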
Now run the following commands:
h4: {280} % lpr /tmp/hi
h4: {281} % lpq -lll
Printer: lp@h4  (subservers lp2, lp3)
 Queue: no printable jobs in queue
 Status: CHOOSER selected 'lp3' at 14:02:50.605
 Status: transferring 'papowell@h4+178' to subserver 'lp3' at 14:02:50.614
 Status: transfer 'papowell@h4+178' to subserver 'lp3' finished at 14:02:50.624
 Status: job 'papowell@h4+178' removed at 14:02:50.632
 Status: starting subserver 'lp3' at 14:02:50.632
 Status: waiting for server queue process to exit at 14:02:50.651
 Status: subserver pid 10182 exit status 'JSUCC' at 14:02:52.872
 Status: no more jobs to process in load balance queue at 14:02:52.879
Server Printer: lp2@h4  (serving lp)
 Queue: no printable jobs in queue
Server Printer: lp3@h4  (serving lp)
 Queue: no printable jobs in queue
 Status: waiting for subserver to exit at 14:02:50.748
 Status: subserver pid 10183 starting at 14:02:50.820
 Status: accounting at start at 14:02:50.821
 Status: opening device '/tmp/lp3' at 14:02:50.833
 Status: printing job 'papowell@h4+178' at 14:02:50.834
 Status: processing 'dfA178h4.private', size 3, format 'f', \
   IF filter 'none - passthrough' at 14:02:50.838
 Status: printing finished at 14:02:50.839
 Status: accounting at end at 14:02:50.839
 Status: finished 'papowell@h4+178', status 'JSUCC' at 14:02:50.841
 Status: subserver pid 10183 exit status 'JSUCC' at 14:02:50.843
 Status: lp3@h4.private: job 'papowell@h4+178' printed at 14:02:50.856
 Status: job 'papowell@h4+178' removed at 14:02:50.871
As you see from the example above, the CHOOSER selected lp3 for use. Let us look at the /tmp/chooser file and see how the chooser.script program was run:
CHOOSER -Apapowell@h4+113 -CA -D2000-06-01-14:02:13.313 -Hh4.private \
 -J/tmp/hi -Lpapowell -Plp -Qlp -aacct -b3 -d/var/tmp/LPD/lp \
 -hh4.private -j113 -kcfA113h4.private -l66 -npapowell -sstatus \
 -t2000-06-01-14:02:13.379 -w80 -x0 -y0 acct
USER=papowell
LD_LIBRARY_PATH=/lib:/usr/lib:/usr/5lib:/usr/ucblib
HOME=/home/papowell
PRINTCAP_ENTRY=lp
 :chooser=/var/tmp/LPD/chooser
 :lp=/tmp/lp
 :sd=/var/tmp/LPD/lp
 :server
 :sv=lp2,lp3
lp2=change=0x0
 done_time=0x1
 held=0x0
 move=0x0
 printable=0x0
 printer=lp2
 printing_aborted=0x0
 printing_disabled=0x0
 queue_control_file=control.lp2
 server=0
 spooldir=/var/tmp/LPD/lp2
lp3=change=0x0
 done_time=0x2
 held=0x0
 move=0x0
 printable=0x0
 printer=lp3
 printing_aborted=0x0
 printing_disabled=0x0
 queue_control_file=control.lp3
 server=0
 spooldir=/var/tmp/LPD/lp3
PS1=$
OPTIND=1
PS2=>
SPOOL_DIR=/var/tmp/LPD/lp
LOGNAME=papowell
CONTROL=Hh4.private
Ppapowell
J/tmp/hi
CA
Lpapowell
Apapowell@h4+113
D2000-06-01-14:02:13.313
Qlp
N/tmp/hi
fdfA113h4.private
UdfA113h4.private
As you can see, the program is invoked with the same options as a normal filter. In addition, the printcap information for each server queue is passed in an environment variable with the name of the server queue. This means that any information the chooser program needs, for example to test for the availability of hardware, can be placed in the printcap information.
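As an illustration, a hypothetical chooser script could use this environment information to skip queues whose printcap information shows printing disabled; the printing_disabled field name is taken from the dump above, and the script as a whole is only a sketch:

#!/bin/sh
# Sketch: pick the first candidate queue whose per-queue environment
# variable does not show printing disabled.
while read queue ; do
    eval info=\$$queue                # e.g. info=$lp2 for queue lp2
    case "$info" in
    *printing_disabled=0x0*)          # printing is not disabled
        echo "$queue"
        exit 0 ;;
    esac
done
exit 0                                # no suitable queue - skip the job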
One of the limitations of using the :chooser program is the relatively high overhead of running an external program. The LPRng software provides support for linking in a user-provided routine that does the same thing as the :chooser program. This routine has the following API or interface:
Printcap Option:  chooser_routine
   chooser_routine@   - default - do not use chooser routine
   chooser_routine    - use chooser routine

Configuration:
   configure --with-chooser_routine=name --with-user_objs=objectfile.o
     defines the CHOOSER_ROUTINE compilation option to name
     includes the objectfile.o in the library

extern int CHOOSER_ROUTINE( struct line_list *servers,
    struct line_list *available, int *use_subserver );
   servers:       all subserver queues for this load balance queue
   available:     subserver queues to choose from
   use_subserver: chosen subserver queue
   RETURNS:  0 - use the 'use_subserver' value as index into the
                 servers list for the server to use
          != 0 - set job status to the value returned
See the LPRng/src/common/lpd_jobs.c and LPRng/src/common/user_objs.c files for details of the servers, available, and use_subserver parameters. The user_objs.c file provides a simple template that can be used as a starting point for a more complex routine. You should modify the code in the user_objs.c file and then use the configure options shown above to cause the user_objs.c file to be compiled and linked into the LPRng executables.
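For illustration only, a heavily simplified routine might look like the following sketch. It assumes that struct line_list exposes char **list and int count members and that both lists carry queue names as plain strings; verify both assumptions against the user_objs.c template and lpd_jobs.c before using anything like this:

#include <string.h>
#include "lp.h"     /* LPRng master header; declares struct line_list */

/* Sketch: choose the first available subserver queue.  The list/count
 * field names and the queue-name assumption are unverified - check
 * them against user_objs.c. */
int CHOOSER_ROUTINE( struct line_list *servers,
    struct line_list *available, int *use_subserver )
{
    int i, j;
    for( i = 0; i < available->count; ++i ){
        for( j = 0; j < servers->count; ++j ){
            if( !strcmp( available->list[i], servers->list[j] ) ){
                *use_subserver = j;    /* index into the servers list */
                return 0;              /* 0 - use *use_subserver */
            }
        }
    }
    /* No candidate matched; leaving *use_subserver unchanged and
     * returning 0 mirrors the "no choice made" case of the script
     * interface, but the exact convention should be taken from
     * user_objs.c. */
    return 0;
}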