OpenVMS Technical Journal V5

Are you Certifiable?

John Gillings

Overview

Certification has become a normal part of the IT industry. It provides a means for people to validate their skills and competencies, and for employers to gain some assurance that their employees really know what they're doing. Typically, gaining a certification involves passing an exam that tests a set of competencies in a particular product or technology. For OpenVMS, there are three levels of certification:
  • Certified Systems Administrator - CSA OpenVMS v7
  • Certified Systems Engineer - CSE OpenVMS v7
  • Accredited Systems Engineer - ASE AlphaServer + OpenVMS v7

The underlying exams for these certifications are designed to test experience in OpenVMS (rather than an ability to "cram").

This paper describes the process used to create the exams and shows why they are valid and worthwhile. It should also give you some insight into what to expect when taking a certification exam, along with some tips on exam preparation and technique.

Exams

Although not ideal for all purposes, multiple-choice exams are the only practical and cost-effective means of testing en masse. They offer objectivity, repeatability, and automated grading, and the same exam can be given anywhere on the planet without concern for grader bias. They are, therefore, the standard means of examining certification candidates.

In the past, the creation of multiple-choice exams was fairly hit or miss: just make up a bunch of questions and hope they test what you want to test. Today, it's done using a highly structured methodology, with feedback and statistical validation. A multidisciplinary team consisting of statisticians, psychometricians (who deal with generic issues surrounding exams), and Subject Matter Experts (SMEs) creates the exams. In the case of the OpenVMS exams, SMEs were drawn from both inside and outside HP; specialists came from Services, Engineering, Education, Presales, and external partners.

The Process

Stating the obvious - the most important aspect of creating an exam is knowing exactly what you want to examine. The first step in the process is to produce a "Competency Model", a list of competencies and skills that the candidate is expected to have mastered. This model is tree structured, starting with general areas and branching down to specifics.

Once the competency model is complete, the next step is to weight the branches according to importance. For example, two branches in an OpenVMS competency model might be "Queuing" and "Security". Security might be deemed more important and, therefore, given a higher weighting. This leads to an exam blueprint, which is essentially the competency model with percentages attached to each branch and distributed down to the leaves. The blueprint also determines the size of the exam - that is, the duration and the number of questions presented to the candidate.

From the exam blueprint, the SMEs can start writing questions (known as items in exam jargon). Each item addresses a specific competency objective from the blueprint, and the number created for each competency branch is determined by the weightings in the blueprint. The typical target is two versions of the exam (or forms in exam jargon) so that a candidate taking the exam a second time receives a different set of questions. Because many items are expected to be "lost" (for a variety of reasons, covered below), it's usually necessary to create at least three times the final number of items required for one form. Items must be distributed according to the blueprint weightings, but note that the final exam may not contain enough questions to cover all competencies. The exam is really just a sample, rather than comprehensive coverage of every competency.
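
To make the arithmetic concrete, here is a minimal sketch of how blueprint weightings might be turned into item-writing targets, including the three-times authoring multiplier mentioned above. The branch names, weights, and form size are invented for illustration; real blueprints are considerably more detailed.

    # Hypothetical blueprint: competency branches and their weightings.
    # The branches, weights, and 60-question form size are invented examples.
    blueprint = {
        "Security": 0.30,
        "Queuing":  0.20,
        "Startup":  0.25,
        "Clusters": 0.25,
    }

    FORM_SIZE = 60        # questions presented to a candidate (assumed)
    AUTHORING_FACTOR = 3  # write ~3x one form's items, since many are "lost"

    def authoring_targets(blueprint, form_size, factor):
        """Distribute the item-writing workload according to blueprint weights."""
        targets = {}
        for branch, weight in blueprint.items():
            per_form = round(weight * form_size)  # items on one exam form
            targets[branch] = per_form * factor   # items the SMEs should write
        return targets

    for branch, n in authoring_targets(blueprint, FORM_SIZE, AUTHORING_FACTOR).items():
        print(f"{branch:10s} write ~{n} items")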

Once the item pool is complete, it is offered for beta testing. Volunteer candidates are invited to take the exam to see how it performs. There are three parts to the beta test: first, a demographic survey (to determine the candidates' self-assessed level of skill in the product); second, answering all items in the complete item pool; and third, comments and feedback.

Analyzing Beta Test Results

At the completion of the beta test, the results are analyzed. Candidates are divided into groups according to their results and demographics. We expect those who report more experience in the product to receive better marks than those with little or no experience.

Individual items are examined statistically to see how they performed across the beta group. The simplest statistical measure is called p, the percentage of candidates who answered the item correctly.

The second measure is called the point-biserial. It's a kind of correlation coefficient. Items that tend to be answered correctly by high-scoring candidates and incorrectly by low-scoring candidates have higher positive values; these items discriminate "good" from "bad" candidates. Items with a flat distribution don't discriminate between the groups and have low or zero point-biserial values. Occasionally an item has a negative value, indicating that low-scoring candidates answered it correctly while the high scorers did not.

The third measure is called r. It measures a confidence interval for the distribution of the item answers.
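
As a rough illustration of the first two measures (this is my own sketch, not HP's analysis tooling, and the beta responses are invented), p and the point-biserial for a single item can be computed from a matrix of candidate results like this:

    import math

    # Invented beta results: one row per candidate, one column per item;
    # 1 means the candidate answered that item correctly.
    responses = [
        [1, 1, 0, 1, 1],
        [1, 0, 1, 1, 0],
        [0, 0, 0, 1, 0],
        [1, 1, 1, 1, 1],
        [0, 1, 0, 0, 0],
        [1, 1, 1, 0, 1],
    ]

    def item_stats(responses, item):
        """Return (p, point-biserial) for one item.

        p is the fraction of candidates answering correctly.  The
        point-biserial correlates item correctness with total score:
        r_pb = (M1 - M0) / s * sqrt(p * (1 - p)), where M1 and M0 are
        the mean total scores of those who answered the item correctly
        and incorrectly, and s is the standard deviation of all totals.
        """
        totals = [sum(row) for row in responses]
        right = [t for row, t in zip(responses, totals) if row[item]]
        wrong = [t for row, t in zip(responses, totals) if not row[item]]
        p = len(right) / len(responses)
        mean = sum(totals) / len(totals)
        s = math.sqrt(sum((t - mean) ** 2 for t in totals) / len(totals))
        r_pb = ((sum(right) / len(right)) - (sum(wrong) / len(wrong))) / s \
               * math.sqrt(p * (1 - p))
        return p, r_pb

    p, r_pb = item_stats(responses, 0)
    print(f"p = {p:.2f}, point-biserial = {r_pb:.2f}")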

Items that fall outside threshold values for these three measures are dropped from the pool. For example, items with p < 25% are considered "too hard" (or perhaps the expected answer is incorrect), and those with p > 90% are considered "too easy". Similarly, items with a low or negative point-biserial, or a low r, are rejected. Any remaining items over the target requirements are then considered: better-performing items are retained, subject to maintaining the target weightings from the exam blueprint.
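
Continuing the sketch, the screening might be applied mechanically as below. The p cut-offs are the ones quoted above; the point-biserial and r thresholds are placeholders of my own, since specific values aren't given here.

    def keep_item(p, r_pb, r, pb_min=0.15, r_min=0.5):
        """Screen one item against the rejection thresholds.

        The p cut-offs (25% and 90%) come from the article; pb_min and
        r_min are invented placeholder thresholds.
        """
        if p < 0.25:       # too hard, or the keyed answer may be wrong
            return False
        if p > 0.90:       # too easy
            return False
        if r_pb < pb_min:  # fails to discriminate strong from weak
            return False
        if r < r_min:      # low confidence in the answer distribution
            return False
        return True

    # An item with p = 0.68 and a healthy point-biserial survives screening.
    print(keep_item(p=0.68, r_pb=0.35, r=0.9))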

For the OpenVMS exams, most items passed the validity tests, so the rejection rate was surprisingly low. As a result, numerous items that were well within acceptable performance levels had to be rejected simply because they were surplus to requirements. Many of these are included in the Exam Preparation Guides (EPGs) as sample/practice items. EPGs are available from the HP certification web site: http://www.hp.com/certification/

Once the final item pool has been selected, statisticians distribute the items among the exam forms so that, statistically, a given candidate can be expected to obtain the same mark on each form.
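
The statistics behind form balancing are beyond the scope of this article, but the intent can be sketched with a simple heuristic: if a candidate's expected raw score on a form is modeled as the sum of its items' p values, greedily assigning each item to the form with the lower running total keeps the two expected marks close. This is an illustration of the goal, not HP's actual procedure, and the item pool below is invented.

    def split_into_forms(pool):
        """Deal items into two forms with near-equal expected marks.

        pool is a list of (item_id, p) pairs.  Modeling the expected raw
        score on a form as the sum of its items' p values, each item
        (largest p first) goes to whichever form currently expects the
        lower mark.  An illustrative heuristic, not HP's real procedure.
        """
        forms, sums = ([], []), [0.0, 0.0]
        for item_id, p in sorted(pool, key=lambda it: it[1], reverse=True):
            target = 0 if sums[0] <= sums[1] else 1
            forms[target].append(item_id)
            sums[target] += p
        return forms, sums

    pool = [("Q1", 0.45), ("Q2", 0.80), ("Q3", 0.62),
            ("Q4", 0.55), ("Q5", 0.70), ("Q6", 0.38)]
    (form_a, form_b), (ea, eb) = split_into_forms(pool)
    print("Form A:", form_a, "expected mark:", round(ea, 2))
    print("Form B:", form_b, "expected mark:", round(eb, 2))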

The exam team then examines the results, looking in particular at the candidates in the middle of the sample. Using both the results and the candidates' demographics, the team selects a pass mark.

The exam is then released.

Example 1 provides an item that was part of the beta test.


Example 1 - Sample item with Beta Statistics

Example 1 is a real exam item with beta test statistics. The correct option (C) has a positive point-biserial, and all incorrect choices have negative point-biserials. A p of 0.68 for the correct answer makes this a reasonably easy question, but it clearly discriminates the strong candidates from the weak. This item was dropped from the final exam pool because better-performing items were available to cover this competency objective.

Writing an item

More jargon! An item consists of the question (known as the stem) and some number of potential answers (choices), one of which is correct; the rest are distracters. In practice, generating the stem and the correct answer is fairly easy. Finding good distracters is the hard part.
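
In data-structure terms, an item is just a stem, one keyed choice, and its distracters, tied back to a blueprint objective. A minimal sketch follows; the field names and the sample content are mine, not from any HP tool:

    from dataclasses import dataclass

    @dataclass
    class Item:
        """One exam item: a stem, the keyed choice, and its distracters."""
        stem: str         # the question itself
        key: str          # the one objectively correct answer
        distracters: list # plausible but incorrect choices
        objective: str    # competency objective from the exam blueprint

    sample = Item(
        stem="Which DCL command displays a process's quotas?",
        key="SHOW PROCESS/QUOTAS",
        distracters=["SHOW QUOTA", "SHOW SYSTEM/QUOTAS", "SET PROCESS/QUOTAS"],
        objective="Process management",
    )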

The science of psychometrics gives us some clues to help generate good items. Here are some examples of rejected items that (negatively) demonstrate some of the factors in creating good exam items.

What's wrong? - example 2


Example 2 - Length rule

Although Example 2 is a perfectly valid question, and germane to OpenVMS management skills, the psychometricians will tell us that just about ANYONE, regardless of their OpenVMS skills, would have guessed "C" as the correct answer, simply because it's substantially longer than all the other options. In general, avoid anything that makes the right answer look different from the other choices.

What's wrong? - example 3


Example 3 - One right, all the rest wrong

In Example 3, the answer is somewhat subjective. The expected correct answer is choice B (which is arguably the "most correct"), but none of the other choices (even D) is definitely wrong. To be valid, an item must have only one objectively correct answer.

What's wrong? - example 4


Example 4 - Trivia quiz

Although the correct answer for Example 4 is B, this is really a trivia question. If you don't "just know" the answer, there's no way to derive the correct response using your knowledge and experience. Questions like this tend to perform poorly. Because we’re not trying to examine rote memory, this type of question does not contribute to the objectives of the exam.

Guidelines

Some other psychometric guidelines include:

  • Avoiding negatives in the stem, for example, "Which command is not..."
  • Avoiding culture-specific terminology (slang, jargon, references to holidays, Latin abbreviations such as i.e., via, sic, status quo, bona fide, and et al., and culture-specific geography)
  • Avoiding acronyms and abbreviations

Other item formats that have proven to perform poorly are:

  • True/false questions
  • Choices "all of the above" and "none of the above"
  • Any choice that includes references to other choices like "A and C"

The item-writing team includes a psychometrician who ensures that all items conform to the writing standards. The team has members from around the world, which helps keep items as culture neutral as possible (the beta analysis also checks for any cultural bias in the results).

Taking Exams

Before taking an exam, make sure you've read the Exam Preparation Guide from http://www.hp.com/certification/. The EPG contains references to many manuals. When an item is written, a reference must be provided to validate the correct answer, and all referenced manuals are listed in the EPG for the exam. Be aware that for some items, even though the answer is well known, finding a reference involved citing an obscure manual. For example, one of the references for the 651 exam is the HP POLYCENTER Software Installation Utility Developer's Guide. Clearly, we don't expect everyone to be familiar with developing PCSI kits. So, regarding manuals:

  • Don't imagine you can just read all the manuals - you can't!
  • If you're already familiar with the manuals, you probably already have the concepts.
  • Don't worry if you haven't read them all from cover to cover.
  • Reviewing the listed sections is a good idea.

Once you're satisfied that you're familiar with all the concepts listed in the EPG, find the testing center in your area. While taking the exams:

  • Read each item carefully.
  • Select the best answer(s) presented.
  • Be mindful of the total amount of time allowed to complete the exam.
  • A rule of thumb is to allow one minute per standard test item.
  • You can mark an item to review it later.
  • Questions will NOT require any complex calculations.
  • If in doubt, eliminate wrong answers and guess from the remainder - no penalty for being incorrect.
  • Use the "Review marked" and "Review incomplete" options at the end of the exam.

What are you waiting for?

With this brief look at the process for creating an exam, I hope I've convinced you that the OpenVMS certification exams are a valid measure of your OpenVMS skills and that certification is a worthwhile goal. Good Luck!

Figure 1 - The goal!

For more information

Refer to the following web site for information about OpenVMS certification:

» http://www.hp.com/certification

An external course offering intended to prepare for certification can be found at:

» http://www.parsec.com/openvms/index.asp?info=openvmstrack1.html