Ask the Wizard Questions
Application implementation with threads
The Question is:
Sir,

I am trying to ascertain, from a performance point of view ONLY, the
pros and cons of developing a multi-threaded TCP/IP server versus an
asynchronous TCP/IP server. The server in question is to run on
OpenVMS systems ONLY, thus portability is a NON issue.

It is my belief, rightly or wrongly, that a single-threaded process
would offer SUPERIOR performance over a multi-threaded process, simply
because it is better to keep a single-threaded asynchronous server
busy (and VMS does asynchronous I/O well) than to incur the overhead
of internal thread context switching in a multi-threaded process.

Assuming I am correct, is this statement still TRUE with OpenVMS 7.0,
which I believe is a multithreaded O.S.? I would appreciate any
feedback you have.

In addition, are you aware of, or has Digital performed, any empirical
studies in which single-threaded asynchronous processes are compared
to multithreaded processes?
The Answer is:
Using DECthreads, thread context switches are relatively inexpensive,
because the switch is done completely in user mode without a trip into
kernel mode. Prior to V7.0, a VMS process could never execute on more
than one CPU at a time... so, even though DECthreads could create
large numbers of user threads, only one thread could be executing at
any one time. These user threads were multiplexed on the process's
single execution context, or kernel thread.
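
To make the two designs concrete, here is a minimal sketch of the
thread-per-connection approach, written against the POSIX pthread
interface that DECthreads provides. The BSD-style socket calls, the
port number, the buffer size, and the echo-style handler are
illustrative assumptions, and error handling is omitted:

    /* Thread-per-connection echo server: a minimal sketch only.        */
    /* Assumes a BSD-style socket library; port 5000, the buffer size,  */
    /* and the echo handler are arbitrary illustrative choices.         */
    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netinet/in.h>

    static void *serve_connection(void *arg)
    {
        int fd = *(int *)arg;
        char buf[512];
        ssize_t n;

        free(arg);

        /* Each read() may block; when it does, DECthreads can switch   */
        /* to another user thread entirely in user mode.                */
        while ((n = read(fd, buf, sizeof buf)) > 0)
            write(fd, buf, (size_t)n);       /* simple echo */

        close(fd);
        return NULL;
    }

    int main(void)
    {
        int listener = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr;
        pthread_t t;
        int *fd;

        memset(&addr, 0, sizeof addr);
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = INADDR_ANY;
        addr.sin_port = htons(5000);         /* illustrative port */

        bind(listener, (struct sockaddr *)&addr, sizeof addr);
        listen(listener, 5);

        for (;;) {
            fd = malloc(sizeof *fd);
            *fd = accept(listener, NULL, NULL);
            if (*fd < 0) { free(fd); continue; }

            pthread_create(&t, NULL, serve_connection, fd);
            pthread_detach(t);               /* one thread per connection */
        }
    }
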
With V7.0 (Alpha only), a process can now execute on as many CPUs as
the system has, so instead of a single kernel thread a process can now
have several. DECthreads treats the kernel threads as virtual CPUs and
schedules the user-mode threads to run on them. The VMS Exec then
schedules the kernel threads to run on physical processors. So, when a
user thread blocks, DECthreads regains control and can schedule a new
thread without involving the Exec... and when DECthreads has no user
threads to schedule on a particular kernel thread, that kernel thread
hibernates.
So, if a thread does an I/O that requires a trip into the Exec, then
on the way back out to user mode the Exec determines whether the
thread needs to block and, if so, transfers control to DECthreads.
This is referred to as an upcall. DECthreads then context switches
away from the blocked thread and schedules another.
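
From the application's point of view, the result is that a blocking
call in one thread does not stall the rest of the process; DECthreads
simply runs another ready thread. Here is a minimal sketch of that
behaviour, again assuming the pthread interface; the sleep() calls
merely stand in for any blocking operation, the timings are arbitrary,
and whether a given call blocks via an upcall or entirely inside
DECthreads depends on the call:

    /* Two threads: one blocks, the other keeps running.               */
    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static void *blocking_thread(void *arg)
    {
        (void)arg;
        sleep(3);                 /* this thread blocks...              */
        printf("blocking thread resumed\n");
        return NULL;
    }

    static void *working_thread(void *arg)
    {
        int i;
        (void)arg;
        for (i = 0; i < 3; i++) { /* ...while this one keeps running    */
            printf("working thread still running (%d)\n", i);
            sleep(1);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t a, b;

        pthread_create(&a, NULL, blocking_thread, NULL);
        pthread_create(&b, NULL, working_thread, NULL);

        pthread_join(a, NULL);
        pthread_join(b, NULL);
        return 0;
    }
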
This is different from other kernel threading models in which every
user-mode or application thread has an associated kernel thread. In
such a model scheduling is far more expensive, because every thread
context switch must be done by the Exec.
I can't really speak to the performance numbers question; I don't have
any DECthreads performance data... hmm, I don't recall ever seeing
any... And we have no V7.0/pre-V7.0 performance comparisons for
threaded applications... yet.
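
For comparison, here is a minimal sketch of the single-threaded
asynchronous design the question describes, using select() to
multiplex every connection in one loop. A native OpenVMS server would
more likely use $QIO with ASTs, but the structure is the same idea;
the socket calls, port number, and echo handler are illustrative
assumptions, with error handling omitted:

    /* Single-threaded asynchronous echo server: one loop, one process, */
    /* select() used to multiplex all connections.  A minimal sketch.   */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <sys/time.h>
    #include <netinet/in.h>

    int main(void)
    {
        int listener = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr;
        fd_set active, ready;
        int maxfd, fd;

        memset(&addr, 0, sizeof addr);
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = INADDR_ANY;
        addr.sin_port = htons(5000);         /* illustrative port */

        bind(listener, (struct sockaddr *)&addr, sizeof addr);
        listen(listener, 5);

        FD_ZERO(&active);
        FD_SET(listener, &active);
        maxfd = listener;

        for (;;) {
            ready = active;
            if (select(maxfd + 1, &ready, NULL, NULL, NULL) < 0)
                continue;

            for (fd = 0; fd <= maxfd; fd++) {
                if (!FD_ISSET(fd, &ready))
                    continue;

                if (fd == listener) {
                    /* new connection: add it to the watched set */
                    int conn = accept(listener, NULL, NULL);
                    if (conn >= 0) {
                        FD_SET(conn, &active);
                        if (conn > maxfd)
                            maxfd = conn;
                    }
                } else {
                    /* existing connection: service it in-line */
                    char buf[512];
                    ssize_t n = read(fd, buf, sizeof buf);
                    if (n <= 0) {            /* closed or error */
                        close(fd);
                        FD_CLR(fd, &active);
                    } else {
                        write(fd, buf, (size_t)n);  /* simple echo */
                    }
                }
            }
        }
    }
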