You must prepare your OpenVMS system to run the server software
so that your system can properly interact with the PC running the
client software. The procedures include the following:
Set up in a mixed-architecture cluster
environment (if applicable).
Start the server on other nodes.
Update the printer and storage database.
Allow OpenVMS Management Station to control the
printer and storage environment.
Keep your printer environment up to date.
Check if running third-party TCP/IP stacks.
Determine and report problems.
Setting Up in a Mixed-Architecture Cluster Environment
The OpenVMS Management Station server creates several configuration
files:
TNT$JOURNAL.TNT$TRANSACTION_JOURNAL
TNT$MONITOR.TNT$MONITOR_JOURNAL
In a common-environment cluster with one common system disk,
you use a common copy of each of these files located in the SYS$COMMON:[SYSEXE]
directory on the common system disk, or on a disk that is mounted by
all cluster nodes. No further action is required.
However, to prepare a common user environment for an OpenVMS
Cluster system that includes more than one common system disk, you
must coordinate the files on those disks.
The following rules apply:
Disks holding common resources must
be mounted early in the system startup procedure, such as in the SYLOGICALS.COM
procedure.
You must ensure that the disks are mounted with
each cluster reboot.
Follow these steps to coordinate files:
Decide where to locate the files. In a cluster
with multiple system disks, system management is much easier if
the common system files are located on a single disk that is not
a system disk.
Copy the following files from SYS$COMMON:[SYSEXE] to
a directory on a disk other than the system disk: TNT$UADB.DAT,
TNT$ACS.DAT, TNT$MONITOR.DAT, TNT$MONITOR.TNT$MONITOR_JOURNAL, TNT$EMERGENCY_MOUNT.COM,
and TNT$JOURNAL.TNT$TRANSACTION_JOURNAL.
Edit the file SYS$COMMON:[SYSMGR]SYLOGICALS.COM on
each system disk and define logical names that specify
the location of the cluster common files.
Example
To place the files on $1$DJA15, define logical names as follows:
$ DEFINE/SYSTEM/EXEC TNT$ACS -
_$ $1$DJA15:[VMS$COMMON.SYSEXE]TNT$ACS.DAT
$ DEFINE/SYSTEM/EXEC TNT$UADB -
_$ $1$DJA15:[VMS$COMMON.SYSEXE]TNT$UADB.DAT
$ DEFINE/SYSTEM/EXEC TNT$JOURNAL -
_$ $1$DJA15:[VMS$COMMON.SYSEXE]TNT$JOURNAL.TNT$TRANSACTION_JOURNAL
$ DEFINE/SYSTEM/EXEC TNT$MONITOR -
_$ $1$DJA15:[VMS$COMMON.SYSEXE]TNT$MONITOR.DAT
$ DEFINE/SYSTEM/EXEC TNT$MONITORJOURNAL -
_$ $1$DJA15:[VMS$COMMON.SYSEXE]TNT$MONITOR.TNT$MONITOR_JOURNAL
TNT$EMERGENCY_MOUNT.COM is created in SYS$SYSTEM or in the
directory pointed to by the TNT$ACS logical name, if that logical name exists.
To ensure that the system disks are mounted correctly
with each reboot, follow these steps:
Copy the SYS$EXAMPLES:CLU_MOUNT_DISK.COM file
to the [VMS$COMMON.SYSMGR] directory, and edit the file for your
configuration.
Edit SYLOGICALS.COM and include commands to mount, with
the appropriate volume label, the system disk containing the shared
files.
Example
If the system disk is $1$DJA16, include the following command:
$ @SYS$SYSDEVICE:[VMS$COMMON.SYSMGR]CLU_MOUNT_DISK.COM -
_$ $1$DJA16: volume-label
Starting the Server on Other Nodes
If you plan to run OpenVMS Management Station on more than
one node in an OpenVMS Cluster without rebooting, you need to start
the software on those nodes.
Use SYSMAN to start the server on all nodes, as follows:
$ RUN SYS$SYSTEM:SYSMAN
SYSMAN> SET ENVIRONMENT/CLUSTER
SYSMAN> DO @SYS$STARTUP:TNT$STARTUP.COM
Or you can log in to each node that shares the SYS$COMMON:
directory and enter the following command:
$ @SYS$STARTUP:TNT$STARTUP.COM
If you are performing an upgrade or a reinstallation and OpenVMS
Management Station is already running on the node, add the RESTART
parameter to the startup command, as follows:
$ @SYS$STARTUP:TNT$STARTUP.COM RESTART
Error Log Information
OpenVMS Management Station writes error log information to
the file TNT$SERVER_ERROR.LOG. This error log is created in the
SYS$SPECIFIC:[SYSEXE] directory. If you start the OpenVMS Management Station
server on multiple nodes in a cluster, which is recommended, multiple
server error logs are generated.
Updating the Printer and Storage Database
When you install OpenVMS Management Station, the installation
starts the OpenVMS Management Station server on the installation
node. If this installation is an upgrade, the server converts the
existing OpenVMS Management Station database to the latest V3.*
format. If this is a new installation, the server creates an initial
version of the database file TNT$ACS.DAT and starts the update functions
automatically.
To complete the database, start the OpenVMS Management Station
server on each node in your cluster. The instances of the server
communicate with each other to determine device, queue, and volume
information, and the server must be running on each node for this
communication to take place.
Editing the System Files
To start the OpenVMS Management Station server from your system
startup files, insert one of the following commands into your system
startup procedures (probably SYS$MANAGER:SYSTARTUP_VMS.COM) after both
the Queue Manager and network are started but immediately before
the ENABLE AUTOSTART/QUEUES command.
| Command | Parameter 1 | Parameter 2 | Description |
|---|---|---|---|
| @TNT$STARTUP | (blank) | n/a | Starts the server. Does not start printer queues or mount volumes. |
| @TNT$STARTUP | RESTART | n/a | Shuts down a running server, then starts the server. Does not start printer queues or mount volumes. |
| @TNT$STARTUP | BOOT | (blank) | Starts the server. Starts any printer queues that are not yet started and are managed by OpenVMS Management Station. Does not mount volumes managed by OpenVMS Management Station. |
| @TNT$STARTUP | BOOT | ALL | Starts the server. Starts any printer queues that are not yet started and are managed by OpenVMS Management Station. Mounts any volumes that are not yet mounted and are managed by OpenVMS Management Station. |
| @TNT$STARTUP | BOOT | PRINTERS | Starts the server. Starts any printer queues that are not yet started and are managed by OpenVMS Management Station. Does not mount volumes managed by OpenVMS Management Station. |
| @TNT$STARTUP | BOOT | STORAGE | Starts the server. Mounts any volumes that are not yet mounted and are managed by OpenVMS Management Station. Does not start any printer queues. |
Note that the effect of TNT$STARTUP BOOT, with no second parameter,
has not changed from earlier releases. This command starts any
printer queues that are not yet started and are managed by OpenVMS Management
Station, but it does not mount any volumes.
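As a sketch of how these pieces fit together (the surrounding commands are illustrative, not copied from your site's startup procedure), the relevant portion of SYSTARTUP_VMS.COM might look like this:

```
$! Hypothetical excerpt from SYS$MANAGER:SYSTARTUP_VMS.COM
$! The queue manager and the network are assumed to be started above this point.
$ @SYS$STARTUP:TNT$STARTUP.COM BOOT ALL   ! start server, managed queues, and managed volumes
$ ENABLE AUTOSTART/QUEUES                 ! TNT$STARTUP must run immediately before this command
```

Any of the parameter combinations from the table can be substituted on the @TNT$STARTUP line, depending on how much of the printer and storage environment you want the server to bring up at boot time.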
Add the following command line to the system shutdown file,
SYS$MANAGER:SYSHUTDWN.COM:
$ @SYS$STARTUP:TNT$SHUTDOWN.COM
Controlling the Printer and Storage Environment
It is not necessary to remove your existing queue startup
and volume mount DCL procedures immediately. The OpenVMS Management
Station server recognizes that you started a queue or mounted a
volume with your command procedures and assumes that you want it
that way.
As you become familiar with the server's management ability,
you can remove or comment out the DCL commands and procedures that
perform these tasks and allow OpenVMS Management Station to control
your printer and storage environment.
In addition, the OpenVMS Management Station server periodically
(every 24 hours) generates a DCL command procedure that includes
the commands to mount all of the volumes managed by OpenVMS Management
Station. If you are familiar with DCL, you can look at this command
procedure to see what actions OpenVMS Management Station performs
for you. In the event of an unforeseen system problem or a corrupt
server database (SYS$SYSTEM:TNT$ACS.DAT), you can use this command
procedure to mount the volumes.
The name of the generated file is TNT$EMERGENCY_MOUNT.COM.
TNT$EMERGENCY_MOUNT.COM is created in SYS$SYSTEM or in the directory
pointed to by the TNT$ACS logical, if that logical name exists.
The OpenVMS Management Station server limits TNT$EMERGENCY_MOUNT.COM
to seven versions.
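If the server database is ever corrupted, the generated procedure can be run directly to restore the mounts. The following is a sketch; the file's actual location depends on whether the TNT$ACS logical name is defined:

```
$! List the retained versions, then run the generated mount procedure
$ DIRECTORY SYS$SYSTEM:TNT$EMERGENCY_MOUNT.COM
$ @SYS$SYSTEM:TNT$EMERGENCY_MOUNT.COM
```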
Keeping Your Printer Environment Up to Date
The OpenVMS Management Station server installation creates
a file named SYS$STARTUP:TNT$UTILITY.COM. This command procedure
scans the OpenVMS system and updates the database of known printers,
queues, and related devices.
When Is the Database
Updated?
The database is updated:
As part of the OpenVMS Management
Station installation.
When you specifically start TNT$UTILITY.COM.
At periodic intervals as a server background thread.
Two logical names control how often this server thread runs:
| Logical Name | Description |
|---|---|
| TNT$PRINTER_RECON_INTERVAL | How often the thread should run, in minutes, from when the server was last started on this node. If you do not define this logical name, the default value is 1440 minutes (24 hours). |
| TNT$PRINTER_RECON_INTERVAL_MIN | The minimum number of minutes that must elapse before the thread should run again, starting from when the database was last updated. If you do not define this logical name, the default value is 60 minutes (1 hour). |
You can think of these logicals as meaning “run the
thread this often (TNT$PRINTER_RECON_INTERVAL), but make sure this
much time has elapsed since the database was last updated (TNT$PRINTER_RECON_INTERVAL_MIN).”
Because you can run TNT$UTILITY.COM yourself, and because
the OpenVMS Management Station server also updates the database,
the TNT$PRINTER_RECON_INTERVAL_MIN logical prevents the database from
being updated more frequently than is actually needed.
If you want to change the defaults for one of these logicals,
define the logical on all nodes on which the OpenVMS Management
Station server is running.
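For example (a sketch; the interval values shown are arbitrary choices for illustration, not recommendations), you could define both logical names on each node as follows:

```
$! Run the printer reconciliation thread every 12 hours, but require
$! at least 2 hours between database updates (both values in minutes)
$ DEFINE/SYSTEM/EXEC TNT$PRINTER_RECON_INTERVAL 720
$ DEFINE/SYSTEM/EXEC TNT$PRINTER_RECON_INTERVAL_MIN 120
```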
Do You Need to Run
TNT$UTILITY.COM Manually?
If you use OpenVMS Management Station to make all of the changes
to your printer configuration, the configuration files are immediately
modified to reflect the changes and you probably do not need to specifically
run the TNT$UTILITY.COM procedure.
However, if you or someone else uses DCL to make a change—for
example, if you use the DELETE /QUEUE command to delete a queue—the
configuration files are not synchronized. In this case, the OpenVMS Management
Station client advises you to run the TNT$UTILITY.COM procedure
to resynchronize the database.
Run the following procedure on one node in the cluster to
make the database match your system:
$ @SYS$STARTUP:TNT$UTILITY.COM UPDATE PRINTERS
For example, if you or someone else uses DCL to delete a queue,
you need to delete that queue from the database. The TNT$UTILITY.COM
procedure assumes that your system is set up and running the way
that you want it to, so you should fix any problems before you run
TNT$UTILITY.COM.
What Are the Requirements
for Running TNT$UTILITY.COM?
You need the SYSNAM privilege to run TNT$UTILITY.COM.
The TNT$UTILITY.COM procedure connects to the OpenVMS Management
Station server on the current OpenVMS system to determine device
and queue information. Therefore, the OpenVMS Management Station
server must be running on the node where you run TNT$UTILITY.COM.
The OpenVMS Management Station server then connects to the
other OpenVMS Management Station servers in the OpenVMS Cluster
to determine device and queue information. It is generally a good
idea to keep the OpenVMS Management Station server running on the
other nodes in an OpenVMS Cluster to keep the database up to the
minute.
However, if the OpenVMS Management Station server is not able to
connect to the OpenVMS Management Station server on a given node, it uses
the known information about that OpenVMS node from the database.
That is, in the absence of a valid connection to that OpenVMS node,
the information in the database is assumed to be correct.
Keeping Your Storage Environment Up to Date
The TNT$UTILITY.COM procedure accepts parameters (UPDATE STORAGE)
to update the storage database. However, the storage database is
updated dynamically every time you use the OpenVMS Management Station
client to perform a storage management operation. Therefore, you
do not need to run TNT$UTILITY.COM to update the storage database.
Enabling Disk Quotas
Before installing OpenVMS Management Station, you might have
disabled disk quotas on the SYSTEM disk. If so, reenable the quotas
and then rebuild to update quota information by entering the following
commands:
$ RUN SYS$SYSTEM:DISKQUOTA
DISKQUOTA> ENABLE
DISKQUOTA> REBUILD
DISKQUOTA> EXIT
Caching Storage Configuration Data
OpenVMS Management Station uses two logical names to determine
how often to refresh cached (in-memory) storage configuration data.
TNT$PURGE_CYCLE_LATENCY—Determines
how often (in seconds) to wait after purging stale device reports
before purging again. This value affects how frequently the clusterwide
data (maintained by a master server) is updated in memory.
Minimum: 180; default: 1800 (30 minutes); maximum: 18000 (5 hours)
TNT$LOCAL_SURVEY_LATENCY—Determines the
delay (in seconds) from one node-specific device survey to the next.
This value is independent of clusterwide surveys requested by the
master server when performing a purge.
Minimum: 6; default: 60 (1 minute); maximum: 600 (10 minutes)
For both logical names, smaller values result in the OpenVMS
Management Station server consuming more CPU cycles in periodic
purges or surveys.
If you do not accept the defaults, you might find that larger
OpenVMS Cluster systems perform better with values on the high end
of the allowed range.
If you do not define these logicals, the OpenVMS Management
Station server uses the default values. If you do define these
logical names, the values are used only if they are within the accepted
range.
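For example (a sketch; the values are illustrative choices at the high end of the allowed ranges, as might suit a larger OpenVMS Cluster system), you could define the logical names as follows:

```
$! Reduce server CPU overhead by purging and surveying less often
$! (both values are in seconds and must fall within the accepted ranges)
$ DEFINE/SYSTEM/EXEC TNT$PURGE_CYCLE_LATENCY 18000
$ DEFINE/SYSTEM/EXEC TNT$LOCAL_SURVEY_LATENCY 600
```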
Running Third-Party TCP/IP Stacks
TCP/IP Services for OpenVMS is the only supported
TCP/IP stack. Additional stacks have not been tested. However,
TCP/IP stacks that are 100 percent compliant with the QIO interface
for TCP/IP Services for OpenVMS should also work. (Contact your
TCP/IP vendor for additional information and support issues.)
For the best chance of success, check the following:
Make sure that the QIO service (for
example, UCXQIO) is enabled.
For TCPware (from Process Software Corporation),
also make sure that the TCPware UCX$IPC_SHR.EXE is an installed
image.
Also for TCPware, make sure you are running a version
of TCPware that correctly implements a DEC C-compatible socket interface.
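As a sketch of the TCPware check (the image file specification shown is an assumption; consult your stack's documentation for the exact location of UCX$IPC_SHR.EXE):

```
$! Verify that the UCX-compatible shareable image is installed
$ INSTALL LIST SYS$SHARE:UCX$IPC_SHR.EXE
```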
Determining and Reporting Problems
If you encounter a problem while using OpenVMS Management
Station, please report it to HP. Depending on the nature of the
problem and the type of support contract you have, you can take
one of the following actions:
If your software contract or warranty
agreement entitles you to telephone support, call HP.
If the problem is related to OpenVMS Management
Station documentation, use the Internet address listed in the preface
of this manual to send us your comments.
Removing the OpenVMS Management Station Server
When you execute the OpenVMS installation or upgrade procedure,
the OpenVMS Management Station server software is automatically
installed on your OpenVMS system disk. If this server software
is later reinstalled using another kit (for example, a kit downloaded
from the Web or a patch kit), you have the option to remove OpenVMS
Management Station. If you use the PCSI utility to remove OpenVMS
Management Station from the OpenVMS system, the following files
are not removed:
TNT$JOURNAL.TNT$TRANSACTION_JOURNAL
Do not delete these files unless you have already removed
OpenVMS Management Station.