Software reliability prediction assessment

Certain industry assessments focus entirely on the development process. Unfortunately, process accounts for only about 22% of the factors that affect software reliability. See the software reliability fact sheet. Highly reliable software does require a sound development process, but it also requires other important factors that process alone cannot compensate for (a weighting sketch follows the list below):

  • People - experience, location, organization, team structure, team sizes, leadership
  • Technique - ability to execute the project, methods and tools for developing the product
  • Product characteristics and risks - requirements, design, code, test plans
  • Process - ability to tailor the process to meet the needs of the project, consistency and repeatability of development processes
  • Project and industry risks - industry, market stability, type of software
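
To make the point concrete, here is a minimal sketch of how these factor groups might be combined into a weighted overall score. Only the roughly 22% process share comes from the text above; the other weights, the factor keys, and the sample scores are illustrative assumptions, not a published model.

```python
# Hypothetical weighted combination of the five factor groups listed above.
# Only the ~22% process weight reflects the article's figure; every other
# weight and score below is an illustrative assumption.
FACTOR_WEIGHTS = {
    "people": 0.25,                      # assumed
    "technique": 0.20,                   # assumed
    "product_characteristics": 0.18,     # assumed
    "process": 0.22,                     # ~22% figure from the text
    "project_and_industry_risks": 0.15,  # assumed
}

def weighted_score(factor_scores: dict[str, float]) -> float:
    """Combine per-factor scores (0-100) into a single overall score."""
    assert abs(sum(FACTOR_WEIGHTS.values()) - 1.0) < 1e-9
    return sum(w * factor_scores[name] for name, w in FACTOR_WEIGHTS.items())

# Example: a strong process alone cannot carry a weak team or a risky product.
print(round(weighted_score({
    "people": 55, "technique": 60, "product_characteristics": 50,
    "process": 95, "project_and_industry_risks": 45,
}), 1))  # -> 62.4
```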

The software reliability assessment can be as detailed as your company needs or wants; you can choose the level of detail based on cost and time constraints. Either way, the assessment results include one of seven predicted clusters, a benchmark against others in your industry, the most sensitive factors for your organization, and the practices that aren't yielding software reliability ROI.

The first step in the assessment is to complete a survey and have it reviewed by an expert. The survey has between 95 and 350 questions, depending on the level of detail you choose, and is completed by various subject matter experts such as the software manager, lead software engineer, software engineer, software tester, and software QA. The score determines one of seven percentile groups, from World Class to Distressed, as well as the predicted defect density, sensitivity analysis, and software reliability predictions (a scoring sketch follows the question list below). The questions are related to:

  • What are the primary risks related to this product? Evolving system hardware? New environment? Old, fragile code? Turnover? Vendors that you can't depend on? Too many distractions from the field?
  • What's in the artifacts? Pictures or words? Can the requirements be tested? Is design an after-the-fact activity? What exactly are people testing? Does anyone consider what the software should NOT do?
  • How is the project managed and executed? Is progress against schedule tracked often enough to allow for mitigation? Is the project as a whole, and each of the individual tasks, "starting" on time? Is it getting derailed by previous releases that require field support? Is verification stalling because software engineers were allowed too much latitude in testing their own code? Is testing stalling because the testers waited until the end to review the requirements?
  • How are the teams organized? Where are they located with respect to the rest of engineering? How much domain experience do the team members have?
  • What methods exist for identifying and mitigating schedule risk before it becomes a big problem? Is the wheel being reinvented (code being written that's available commercially)? Are there too many short-term contractors who don't understand the product domain? Are the software engineers and marketing staff restricted from gold-plating once the scope is set? Are big projects decomposed into several smaller ones to avoid the schedule delay that inevitably affects reliability?
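
As a rough illustration of the scoring step described above, the sketch below maps a normalized survey score to one of seven percentile clusters and a predicted defect density. The endpoint cluster names ("World Class" and "Distressed") come from the text; the five intermediate names, the score cutoffs, and the defect-density values are placeholder assumptions, not the actual assessment model.

```python
# Hypothetical mapping from a normalized survey score (0-100) to one of the
# seven percentile clusters and a predicted defect density. Cutoffs, the five
# middle cluster names, and density values are illustrative assumptions.
from bisect import bisect_right

CLUSTER_BOUNDS = [40, 55, 65, 75, 85, 93]  # assumed score cutoffs
CLUSTER_NAMES = [
    "Distressed", "Below Average", "Average", "Good",   # middle names assumed
    "Very Good", "Excellent", "World Class",
]
# Assumed predicted defect densities (defects/KSLOC), improving with cluster.
DEFECT_DENSITY = [6.0, 3.0, 1.5, 0.8, 0.4, 0.2, 0.1]

def classify(score: float) -> tuple[str, float]:
    """Return (cluster name, predicted defect density) for a 0-100 score."""
    idx = bisect_right(CLUSTER_BOUNDS, score)
    return CLUSTER_NAMES[idx], DEFECT_DENSITY[idx]

if __name__ == "__main__":
    for s in (35, 70, 95):
        name, dd = classify(s)
        print(f"score {s:>3}: cluster={name}, "
              f"predicted defect density={dd} defects/KSLOC")
```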