...

#   Criteria                                                    Score (1=poor, 5=excellent)

Objects
1.  Are they concrete?
2.  Can they be realized or instantiated, e.g., constructed and delivered?
3.  Is the object worded properly, and is it clear and understandable to clients?
4.  Can two or more raters score the object's complexity reasonably consistently using a simple "simple, average, complex" scoring scale?
5.  Do they represent the totality of the project deliverables?

Object types
6.  Does the object type accurately categorize the object?
7.  Will the object type assigned to the object allow for comparison across multiple projects?

Milestones
8.  Are there enough milestones (at least 4)?
9.  Are the milestones spaced appropriately (at least one per month)?
10. Can the client understand the milestone description?
11. Are the milestone due dates a 50/50 estimate?

Tasks
12. Do the tasks use a clear, easy-to-read verb-object format?
13. Do the tasks clearly show how an object is realized, constructed, or delivered?
14. Are the tasks understandable to a typical IT employee?
15. Do the tasks average 10-14 hours?
16. Are task estimates a reasonable size (less than 20 hours)?
17. Are the tasks a 50/50 estimate?
18. Are the tasks assigned to appropriate phases?
19. Are the start and end dates correct and realistic?

Staff
20. Are the team members, roles, and responsibilities for the project clearly identified?

Project
21. If required, does the project have a charter?
22. Are a client sponsor and contact clearly identified?
23. Is the project linked to ITS and university strategic goals within the system?
24. Does the project have costs and benefits identified and recorded (NPV, ROI, payback period)?
25. Does the project have a measurement model in place to track fulfillment of the business case and ROI?
26. Does the business case clearly identify sustainable funding?
27. Does the project charter address change management appropriately, if needed?
28. Has the project been rated along the dimensions needed for portfolio analysis within the system?

Average

% scored 4 or 5
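
The two summary rows are straightforward arithmetic: the average of all 28 criterion scores, and the percentage of criteria scored 4 or 5. A minimal sketch of the computation in Python (the function and variable names are illustrative, not part of any SPM tool):

    # Summarize a completed scorecard. Assumes one integer score (1-5)
    # per criterion; names here are hypothetical, not part of SPM.
    def summarize_scores(scores: dict[int, int]) -> tuple[float, float]:
        """Return (average score, percent of criteria scored 4 or 5)."""
        values = list(scores.values())
        average = sum(values) / len(values)
        pct_4_or_5 = 100 * sum(1 for v in values if v >= 4) / len(values)
        return average, pct_4_or_5

    # Example: the first seven criteria as scored by one reviewer.
    scores = {1: 5, 2: 4, 3: 3, 4: 4, 5: 5, 6: 2, 7: 4}
    avg, pct = summarize_scores(scores)
    print(f"Average: {avg:.2f}")    # Average: 3.86
    print(f"% 4 or 5: {pct:.0f}%")  # % 4 or 5: 71%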

...

Estimators create estimates that are “in the middle”: not too high, not too low. An estimate is correct if the estimator feels there is as much chance of going over it as of coming in under it. With the measurement capabilities in SPM, if estimators “sandbag” by consistently estimating high, the gap becomes easy to detect as tasks close out, both in the V1 metric and by analyzing the various estimate-versus-actual metrics within SPM. This differs from the three-point estimation model (optimistic, most likely, and pessimistic estimates) used in traditional project management methodologies such as the Project Management Institute’s PMBOK. Here the estimator enters a single estimate, one the team is just as likely to exceed as to come in under. With small task sizes, the volatility metrics, and the numerous metrics that track both task re-estimation and the difference between actual hours spent and estimated hours, three points of estimation are unnecessary. This simplifies the estimator’s work.
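
To make the sandbagging signal concrete, here is a minimal sketch, assuming each closed task records estimated and actual hours (the field names are hypothetical and do not reflect SPM's data model or the V1 metric's actual formula). A ratio persistently below 1.0 means actuals keep coming in under estimates, i.e., the estimates are running high rather than 50/50:

    # Hypothetical sketch: surface a persistent high-estimating pattern
    # from closed tasks. Field names are illustrative, not SPM's API.
    def estimate_bias(tasks: list[dict]) -> float:
        """Mean of actual/estimated hours across closed tasks.

        ~1.0 -> estimates are centered (true 50/50 estimates)
        <1.0 -> estimates run high (possible sandbagging)
        >1.0 -> estimates run low (chronic overruns)
        """
        ratios = [t["actual_hours"] / t["estimated_hours"]
                  for t in tasks if t["estimated_hours"] > 0]
        return sum(ratios) / len(ratios)

    closed_tasks = [
        {"estimated_hours": 12, "actual_hours": 8},
        {"estimated_hours": 14, "actual_hours": 9},
        {"estimated_hours": 10, "actual_hours": 7},
    ]
    print(f"bias = {estimate_bias(closed_tasks):.2f}")  # bias = 0.67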

Ref.: Internal PPMO Employee Handbook