Tips for Ensuring a Successful Software Project

(Content from softwarePlanner.com)

  1. Customer Requirements
  2. Managing Risk
  3. Providing Weekly Status Reports
  4. Creating Solid Detailed Designs
  5. Creating Solid Test Designs
  6. Releasing Software for Customer Testing
  7. Conducting Project Post Mortems
  8. Minimizing Software Defects via Inspections

Tips for Collecting Customer Requirements

Often customer requirements are stated vaguely and other times requirements are not documented at all.  When this happens, customers view the requirements broadly, while developers view the requirements very narrowly. 

For example, a vague customer requirement may be to create a logon page for your application.  The developer may be thinking that the end user will enter their email address and password, have this information authenticated, and then be allowed to log on to the system.  The customer, on the other hand, may be envisioning far more than that.

As you can see, the effort for creating a simple logon page (entering an email address and password and authenticating them) is much less than the effort for creating the bells and whistles the client envisions.  Unless the exact requirements are documented and agreed upon, the project can slip due to the additional effort the client envisioned.
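
To make the gap between the two interpretations concrete, here is a minimal sketch (in Python, using a hypothetical in-memory user store and the standard library's PBKDF2 helper) of the "simple" logon the developer has in mind.  Everything beyond this, password expiration, lockouts, daily reports and other bells and whistles, is the extra effort the client may be assuming.

    # A minimal sketch of the "simple" logon, assuming a hypothetical
    # in-memory user store; a real application would use a database and
    # a web framework, but the core effort is only this small.
    import hashlib
    import hmac
    import os

    def hash_password(password, salt):
        # Derive a password hash with PBKDF2 from the standard library.
        return hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, 100_000)

    _salt = os.urandom(16)
    USERS = {"jane@example.com": (_salt, hash_password("s3cret!", _salt))}  # illustrative only

    def logon(email, password):
        # Return True if the email/password pair authenticates.
        record = USERS.get(email.lower())
        if record is None:
            return False
        salt, stored_hash = record
        return hmac.compare_digest(stored_hash, hash_password(password, salt))

    print(logon("jane@example.com", "s3cret!"))   # True
    print(logon("jane@example.com", "wrong"))     # False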

Below are the keys to successfully collecting customer requirements:

  1. Make the requirement very specific - Be sure to include a narrative that explains the requirement in detail.  Be specific about how each feature will work. 
  2. Create a Prototype - Create a prototype for the feature to ensure that your developers and your customer agree on the features and the presentation of the feature.  You can quickly create a prototype using Front Page or any other HTML editor.  If there are buttons on the page, explain in detail what will happen as the buttons are clicked.  Attach a "screen shot" of your feature to your documentation of the customer requirement.
  3. Specify "outside requirements" for the Feature - In the example of the Logon page, you may have "outside requirements".  One example is security requirements (passwords must be changed every X number of days, must be a mix of alphabetic and numeric characters, etc.).  Another example is data conversion requirements (you may need to write conversion scripts that convert all users from your Windows 2000 domain to users for your system).  Another example is performance and response time requirements (when a person clicks Logon, they must be logged in within 5 seconds).  These are just a few examples; you must evaluate all "outside requirements" (a sketch of a simple password policy check appears after this list).
  4. Document each Customer Requirement - It is wise to create a template that you use to define all your customer requirements.  The template will jog your memory to ask specific questions as you define the requirements.  As you collect your requirements, document them based on the template.  This allows you to print your assumptions and discuss them with your client.  If you would like a template for customer requirements, go to http://www.pragmaticsw.com/Pragmatic/Templates/FunctionalSpec.rtf.
  5. Get Developer and Customer Signoff - Have your developer and customer sign the customer requirements document so that you can later refer back to it, showing that both parties agree to the requirement.
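
As a concrete illustration of item 3 above, here is a hedged sketch (in Python) of one "outside requirement", a password policy check.  The specific rules, the 90-day expiration, the 8-character minimum, and the letter/digit mix, are assumptions chosen for illustration, not requirements from this article.

    # A sketch of a password policy check; the policy values below are assumptions.
    from datetime import date, timedelta

    MAX_PASSWORD_AGE = timedelta(days=90)   # "changed every X number of days" -- X assumed to be 90

    def password_policy_violations(password, last_changed, today=None):
        # Return a list of violations; an empty list means the password passes.
        today = today or date.today()
        problems = []
        if len(password) < 8:
            problems.append("password must be at least 8 characters")
        if not any(c.isalpha() for c in password):
            problems.append("password must contain at least one letter")
        if not any(c.isdigit() for c in password):
            problems.append("password must contain at least one digit")
        if today - last_changed > MAX_PASSWORD_AGE:
            problems.append("password has expired and must be changed")
        return problems

    print(password_policy_violations("abc123xy", date(2024, 1, 1), today=date(2024, 2, 1)))  # []
    print(password_policy_violations("letters", date(2023, 1, 1), today=date(2024, 2, 1)))   # three violations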

Tips for Managing Risk in Software Projects

To deliver software on-time and on-budget, successful project managers understand that software development is complex and that unexpected things will happen during the project life cycle.  There are 2 types of risks that may affect your project during its duration:

  • Risks you know about - There are many risks that you know about and can mitigate.  For example, let's assume that you have assembled a team to work on the project and one of the stellar team members has already scheduled a 3-week vacation, which you agreed to allow, just before testing is scheduled.  The successful project manager will identify this risk and provide some contingency plans to control the risk.

  • Risks you don't know about - There are also risks that you don't know about, so a general risk assessment must be done to build time into your schedule for these types of risks.  For example, your development server may crash 2 weeks into development and it may take you 3 days to get it up and running again.

The key to managing risks is to build contingency plans for risk and to build enough time into your project schedule to mitigate risks that you do not know about.  Below is a list of the 5 most common scheduling risks in a software development project:

  1. Scope and feature creep - Here is an example: Let's say the client agrees to a requirement for a Logon page.  The requirement specifies that the client will enter their userid/password, it will be validated, and entry will be allowed upon successful validation.  Simple enough.  Then in a meeting just before coding commences, the client says to your project manager "I was working with another system last week and they send the client a report each day that shows how many people log in each day.  Since you have that information already anyway, I'm sure it will only take a couple of minutes to automate a report for me that does this."  Although this sounds simple to the client, it requires many different things to happen.  First, the project manager has to amend the requirement document.  Then the programmer has to understand the new requirement.  The testing team must build test scenarios for this.  The documentation team must now include this report in the documentation.  The user acceptance team must plan to test this.  So as you can see, a simple request can add days of additional project time, increasing risk.  A sketch of what even this "simple" report involves appears after this list.

  2. Gold Plating - Similar to scope and feature creep, programmers can also incur risk by making a feature more robust than is necessary.  For example, the specification for the Logon page contained a screen shot that showed very few graphics; it was just a simple logon process.  However, the programmer decides that it would be really cool to add a FLASH-based movie on the page that fades in the names of all the programmers and a documentary on security.  This new movie (while cool in the programmer's eyes) takes 4 hours of additional work, and puts their follow-on tasks in jeopardy because they are now behind schedule.

  3. Substandard Quality - The opposite of Gold Plating is substandard quality.  In the gold plating example, the programmer got behind schedule and desperately needed to catch up.  To catch up, the programmer decided to quickly code the next feature and not spend the time testing the feature as they should have.  Once the feature went to the testing team, a lot of bugs were found, causing the testing / fix cycle to extend far beyond what was originally expected.

  4. Unrealistic Project Schedules - Many new team members fall into this trap.  Project members (project managers, developers, testers, etc.) all get pressure from customers and management to complete things in a certain time frame, within a certain budget.  When the time frames are unrealistic for the feature set dictated, some unseasoned team members will bow to the pressure and create their estimates based on what they think their managers want to hear, knowing that the estimates are not feasible.  They would rather delay the pain until later, when the schedule spirals out of control.

  5. Poor Designs - Many developers and architects rush the design stage in favor of getting the design behind them so the "real" work can begin.  A solid design can save hundreds of programming hours.  A design that is reusable allows changes to be made quickly and lessens testing effort.  So the design stage should not be rushed.
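
As referenced in item 1 above, even the client's "couple of minutes" report implies real work.  The sketch below (in Python) shows a minimal version of such a daily login report; the log records and field names are assumptions for illustration, and the point is that even this small feature still needs a requirement update, tests, and documentation.

    # A sketch of the "couple of minutes" report from item 1: count logins per day
    # from an assumed login log and format a daily summary.
    from collections import Counter
    from datetime import date

    login_log = [   # illustrative records; real data would come from the application's database
        {"user": "jane@example.com", "date": date(2024, 2, 1)},
        {"user": "raj@example.com",  "date": date(2024, 2, 1)},
        {"user": "jane@example.com", "date": date(2024, 2, 2)},
    ]

    logins_per_day = Counter(entry["date"] for entry in login_log)

    report_lines = ["Daily Login Report", "------------------"]
    for day in sorted(logins_per_day):
        report_lines.append("%s: %d login(s)" % (day.isoformat(), logins_per_day[day]))
    email_body = "\n".join(report_lines)
    print(email_body)   # in production this would be emailed to the client each day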

Tips for Providing Weekly Status Reports in Software Projects

To deliver software on-time and on-budget, successful project managers communicate regularly with all members of the team (management, leaders, testers, programmers, clients, etc.).  Creating weekly status reports is a great way to ensure that everyone is on the same page, and it also benefits the team by providing a chance to step back and analyze how the project is progressing.

The key to great communication is to collaborate with team members each day and to create weekly status reports to summarize your progress and identify issues that need resolution.  Below is a list of tips for making weekly status reports meaningful:

  1. Use the Red/Yellow/Green Metaphor - Status reports are designed to show accomplishments and to identify areas that need attention.  Using a Red/Yellow/Green metaphor is a great way to separate those areas of the status report:

    Red - List critical issues that are keeping you from delivering on schedule and on budget.  These items need management's help to resolve.  Example: You cannot begin testing because management has not approved the purchase of your test server.
    Yellow - List issues that management should be aware of but that do not keep you from delivering on schedule and on budget.  These items may not need management's help to resolve.  Example: Your testing team is running 2 days behind schedule, but the testing team has agreed to work the weekend to catch up.
    Green - List accomplishments or progress made on deliverables for the week.  Example: Provide a bulleted list of deliverables that should have been achieved this week, along with their status.

  2. Identify Next Week's Priorities - Identify next week's tasks and priorities so that everyone knows what is expected of them in the upcoming week.  Other teams can also use this list to ensure that any tasks that depend on them are aligned and ready to be worked on.

  3. Provide Metrics - Providing metrics allows your team to step back and see the bigger picture.  Typical metrics include defect metrics (number of defects by status/severity/priority, etc.) and test case metrics (number of test cases run/passed/failed, etc.).  You can also include metrics regarding deliverables and your risk management efforts.  A sketch of this kind of summary appears after this list.

  4. Discussion Forums - Create a discussion forum for your team members.  Post the weekly status reports in the discussion forum so that they are automatically distributed via email and a history is kept of each weekly status.

  5. Template - We have created a template we use for the weekly status report.  To download a copy click here.
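
As referenced in item 3 above, here is a minimal sketch (in Python) of the kind of summary a weekly status report might include.  The field names and sample records are assumptions for illustration; in practice the data would come from your defect tracking and test management tools.

    # Summarize defects by status/severity and test cases by result; sample data is illustrative.
    from collections import Counter

    defects = [
        {"id": 101, "status": "Open",   "severity": 1},
        {"id": 102, "status": "Open",   "severity": 3},
        {"id": 103, "status": "Fixed",  "severity": 2},
        {"id": 104, "status": "Closed", "severity": 2},
    ]
    test_cases = [
        {"id": "TC-1", "result": "Passed"},
        {"id": "TC-2", "result": "Failed"},
        {"id": "TC-3", "result": "Not Run"},
    ]

    print("Defects by status:  ", dict(Counter(d["status"] for d in defects)))
    print("Defects by severity:", dict(Counter("Severity %d" % d["severity"] for d in defects)))
    print("Test cases by result:", dict(Counter(t["result"] for t in test_cases)))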

Tips for Creating Solid Detailed Designs

  1. Document your Architectural Roadmap - If your company has not done so yet, document your architectural roadmap for delivering solutions.  If you wish to see a template for doing this, go to http://www.pragmaticsw.com/Pragmatic/Templates/ArchitectureOverview.rtf.
  2. Create a Prototype - If a prototype was not created during the customer requirements phase, create a prototype for the feature to ensure that your developers and your project manager agree that the technical design meets the customer requirement.  You can quickly create a prototype using Front Page or any other HTML editor.  If there are buttons on the page, explain in detail what will happen (from a technical perspective) as the buttons are clicked. 
  3. Specify the Details of Each Function - For example, if you are creating a detailed design for a logon screen, you should have a section that describes exactly what will happen when the Logon button is clicked.  It may describe that you will call a business object to validate the userid and password and that, if either is incorrect, an error is raised (specify the exact error message).  It may further specify what objects will be used (like common objects to validate email addresses, etc.).  A sketch of this level of detail, expressed in code, appears after this list.
  4. Specify "Other Design Considerations" - When implementing solutions based on customer requirements, you should specify if there are design considerations outside of the norm.  For example, you may specify that the software under the current design will work with IE and Netscape versions 6 and higher.  You may also specify that no localization will be done and that the user interface will support only the English language.  These are examples of "other design considerations".
  5. Break the Design into Tasks and Provide Estimates - As the developer specifies the design for each feature, a list of tasks that must be performed to complete each technical function must be identified, along with an estimate of how long (in hours) each task will take to complete.
  6. Have a Team Design Review - Once the detail design is complete, have the technical team review the design (and estimates) to ensure that the developer covered all the bases.  Many times these design reviews bring up ideas that allow you to reduce the effort with a more elegant approach.  This approach also allows "other design considerations" to surface and be discussed. 
  7. Get Developer and Project Manager Signoff - Have your developer and project manager sign the detailed design document so that you can later refer back to it, showing that both parties agree to the design.
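
As referenced in item 3 above, a detailed design can even be expressed as skeleton code.  The sketch below (in Python) shows the level of detail intended: which business object the Logon button calls, which shared helper validates the email format, and the exact error messages raised.  All class names and messages here are illustrative assumptions, not part of any real specification.  Tasks and hour estimates (item 5) can then be attached to each function sketched this way.

    # Skeleton of the Logon button behavior described in a detailed design; names are assumptions.
    import re

    class LogonError(Exception):
        # Raised when a logon attempt fails validation; the message text is part of the design.
        pass

    EMAIL_PATTERN = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

    def is_valid_email(address):
        # Common helper: basic email-format validation shared across the application.
        return bool(EMAIL_PATTERN.match(address))

    class UserAccountService:
        # Stands in for the "business object" that validates the userid and password.
        def __init__(self, accounts):
            self._accounts = accounts
        def validate(self, userid, password):
            return self._accounts.get(userid) == password

    def on_logon_clicked(userid, password, service):
        # Detailed design for the Logon button: validate format first, then credentials.
        if not is_valid_email(userid):
            raise LogonError("Please enter a valid email address.")              # exact message per design
        if not service.validate(userid, password):
            raise LogonError("The user id or password you entered is incorrect.")
        # On success, the design would redirect the user to the home page (not shown here).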

Tips for Creating Solid Test Designs

To deliver software on-time and on-budget, project managers must be able to understand the testing effort to adequately estimate the project.   Once solid customer requirements have been created (see our prior newsletter for tips on collecting solid requirements) and a solid detail design has been done (see last month's newsletter for tips on creating solid detail designs), the test team leader should create a test plan that explains the testing strategy.  The most reliable way to do this is to create a "Test Design" document. 

The test design document allows your testing team to thoroughly think through the testing approach, and to determine the effort involved in providing adequate test coverage for each functional specification item.   Below are the keys to successfully creating test designs (see next section for a template to get you started):

  1. Specify Background Information - Give the tester a brief background of the project so that they can understand more about why the project is being undertaken.  This will help as you bring new testers onto the team during the testing phase, allowing you to communicate a consistent message and decreasing the amount of time it takes to get each tester started.
  2. Specify Code Freeze Date - Many projects have cost overruns because of "gold plating".  Gold plating is when project managers allow customers and programmers to continue adding bells and whistles to the software that are outside of the project scope.  At first these additions seem innocent, under the guise of making the product more robust.  Many seem to take only a few hours, but each has a ripple effect on the project because it affects your test plans, documentation, release requirements, support requirements, and many other things.  The best way to prevent this is for the project manager and testing team to hold the customers and programmers to the specification and enforce a date after which no new programming is done.  That date is the "code freeze date", and after that point, only defect repair is done.  Without a code freeze date, the testing team is testing a product that is in constant change.
  3. Create a Test Case Matrix - This is a traceability matrix that ensures that you have adequate test coverage of the functional specifications.  To create the matrix, simply list each functional specification item and then list all test cases you have created for it.  Once this is created, you can easily determine if you have enough test coverage for each functional specification item.  Many times you will discover that you missed some functional specification items or did not have enough test cases for a specific area.  A sketch of such a matrix appears after this list.
  4. Specify Features Excluded from Testing - Just as important as listing the items you will test, it is wise to list the features that will not be tested, to ensure that everyone agrees with this.  For example, you may be planning to test the software with Internet Explorer 6.0 and Netscape 6.0.  If this is not specified, your project manager may expect you to test with all versions of IE and Netscape, increasing your testing effort significantly, as this will cause you to build additional test machines with those versions.  Having this specified in your test plan will reduce the chance of these cost and effort overruns.
  5. Specify Release Criteria - The "Release Criteria" describes the procedures for allowing the project to enter the production phase.  It specifies:
    A) Smoke test criteria -  The "smoke test" is a series of standard test cases that your testing team runs when a new build is created, used to ensure that the release is solid enough to begin testing.  If any of those test cases fail, the build is sent back to the programmers for repair before major testing begins. 
    B) User acceptance test criteria - This specifies the criteria for moving the code to an area in which your users can test it.  For example, you may specify that all severity 1 or 2 defects must be fixed before moving it to user acceptance testing.
    C) Release to Production Criteria - This specifies your criteria for ensuring that the code is production ready.
  6. Break the Test Design into Tasks and Provide Estimates - As the test leader specifies the test design, a list of tasks that must be performed to complete each test function must be identified, along with an estimate of how long (in hours) each task will take to complete.
  7. Have a Team Design Review - Once the test design is complete, have the testing team review the design (and estimates) to ensure that the tester covered all the bases.  Many times these design reviews bring up ideas that allow you to reduce the effort and increase testing coverage.
  8. Get Test Lead and Project Manager Signoff - Have your test lead and project manager sign the test design document so that you can later refer back to it, showing that both parties agree to the approach.
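
As referenced in item 3 above, here is a minimal sketch (in Python) of a test case matrix.  The specification items, test case IDs, and coverage threshold are assumptions for illustration.

    # Map each functional specification item to its test cases and flag thin coverage.
    functional_spec = ["FS-1 Logon page", "FS-2 Password reset", "FS-3 Audit report"]

    test_case_matrix = {
        "FS-1 Logon page":     ["TC-01 valid logon", "TC-02 bad password", "TC-03 locked account"],
        "FS-2 Password reset": ["TC-04 reset by email"],
        "FS-3 Audit report":   [],   # no coverage yet -- exactly what the matrix should surface
    }

    MIN_CASES_PER_ITEM = 2   # assumed coverage threshold for illustration

    for item in functional_spec:
        cases = test_case_matrix.get(item, [])
        flag = "" if len(cases) >= MIN_CASES_PER_ITEM else "  <-- needs more test cases"
        print("%s: %d test case(s)%s" % (item, len(cases), flag))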

Tips for Releasing Software for Customer Testing

Once your testing team has thoroughly tested your software, it is time for the customer to test it before moving the software into production.  This is referred to as the "User Acceptance Test" phase of the software lifecycle.  It is an important phase, as it is the first opportunity for the end clients to work with your software.  A well-organized User Acceptance Test can bear many rewards.

The key to a successful User Acceptance Test phase is to have a very organized plan for conducting the testing.  Below is a list of 5 Tips for conducting successful User Acceptance Tests:

  1. Set Expectations - Educate the customer, letting them know that the goal of User Acceptance Testing is to find defects now so that they can be prevented once the software is in production.  Finding defects at this stage is a good thing and is encouraged.

  2. Identify Defect Resolution Procedures - As defects are found, you must have a documented strategy for allowing the client to report defects and to review the status of each defect.  Products like Defect Tracker (www.DefectTracker.com) or Software Planner (www.SoftwarePlanner.com) allow customers to submit support tickets on-line and check the status of those tickets.

  3. Drop Schedule - As defects are fixed, you should have a "Drop Schedule" for new releases.  For example,  during the User Acceptance Test phase, you may release a new copy of the software each Wednesday for your customers to test.  This allows the customer to rely on a specific time table for new releases so that they can re-test defects that were previously fixed.

  4. Document Current Defects and Testing Statistics - Before beginning User Acceptance Testing, you may have some low priority defects that have not been fixed.  Let the customer know what those defects are so that if they encounter them, they will not report them again.  Another good approach is to supply the customer with statistics that show how many test cases were run during your testing and how many defects came out of that effort.  Each week, do weekly status reports for your customer, showing how many defects have been found by their efforts and how many defects are outstanding.

  5. Create a User Acceptance Testing Document - Prior to beginning User Acceptance Testing, create a "User Acceptance Testing Release document."  This document explains the plan for User Acceptance Testing, and provides a conduit for a successful testing phase.  We have created a template that you can use for the document, download it by clicking here.

Tips for Conducting Project Post Mortems

Very few projects go as planned.  Many projects encounter problems that must be corrected and a few lucky projects go smoother than planned.  Regardless of how successful or disastrous a project is, it is important to review the project in detail once the project is over.  This allows your team to figure out what things were done well and to document the things that need improvement.  It also aids in building a knowledge base that teams coming behind you can review to ensure they get the most out of their upcoming projects.

The key to successful projects is to learn from past mistakes.  Below is a list of 5 Tips for conducting successful Post Mortem reviews:

  1. Plan Your Post Mortem Review - Upon completion of a project, the Project Manager should conduct a "Post Mortem" review.  This is where the Project Manager invites all the major players of the team (Analysts, Lead Programmers, Quality Assurance Leaders, Production Support Leaders, etc) to a meeting to review the successes and failures of the project.

  2. Require Team Participation - Ask the attendees to bring a list of 2 items that were done well during the project and 2 things that could be improved upon.  

  3. Hold the Post Mortem Review Meeting - Go around the table and have each person discuss the 4 items they brought to the meeting.  Keep track of how many duplicate items you get from each team member.  At the end of the round table discussion, you should have a count of the most popular items that were done well and the most agreed-upon items that need improvement.  Discuss the top 10 success items and the top 10 items that need improvement.

  4. List Items Done Well and Things Needing Improvement - After listing the top 10 success and improvement items, discuss specific things that can be done to address the items needing improvement in the next release.  If some items need more investigation, assign specific individuals to find solutions.

  5. Create a Post Mortem Report - The best way to keep this information organized is to create a "Post Mortem" report, where you document your findings.  Send the Post Mortem report to all team members. Before team members embark on their next project, make sure they review the Post Mortem report from the prior project to gain insight from the prior project.  We have created a template that you can use for the document, download it by clicking here.

Tips for Minimizing Software Defects via Inspections

Many of us have experienced projects that drag on much longer than expected and cost more than planned.  Most of the time, this is caused either by inadequate planning (requirement collection and design) or by an inordinate number of defects found during the testing cycle.

A major ingredient to reducing development life cycle time is to eliminate defects before they happen. By reducing the number of defects that are found during your quality assurance testing cycle, your team can greatly reduce the time it takes to implement your software project.

The key to reducing software defects is to hold regular inspections that find problems before they occur.  Below is a list of 5 Tips for Reducing Software Defects:

  1. Conduct Requirement Walkthroughs - The best time to stop defects is before coding begins.  As the project manager or requirements manager begins collecting the requirements for the software, they should hold meetings with two or more developers to ensure that the requirements are not missing information and are not flawed from a technical perspective.  These meetings can bring to the surface easier ways to accomplish the requirement and can save countless hours in development if done properly.  As a rule of thumb, the requirements should be fully reviewed by the developers before the requirements are signed off.

  2. Conduct Peer Code Reviews - Once coding begins, each programmer should be encouraged to conduct weekly code reviews with their peers.  The meeting is relatively informal, where the programmer distributes source code listings to a couple of his/her peers.  The peers should inspect the code for logic errors, reusability and conformance to requirements.  This process should take no more than an hour and if done properly, will prevent many defects that could arise later in testing. 

  3. Conduct Formal Code Reviews - Every few weeks (or before a minor release), the chief architect or technical team leader should do a formal inspection of their team's code.  This review is a little more formal, where the leader reviews the source code listings for logic errors, reusability, adherence to requirements, integration with other areas of the system, and documentation.  Using a checklist will ensure that all areas of the code are inspected.  This process should take no more than a couple of hours for each programmer and should provide specific feedback and ideas for making the code work per the design.

  4. Document the Results - As inspections are held, someone (referred to as a scribe) should attend the meetings and make detailed notes about each item that is found.  Once the meeting is over, the scribe will send the notes to each team member, ensuring that all items are addressed.  The scribe can be one of the other programmers, an administrative assistant, or anyone on the team.  The defects found should be logged in your defect tracking system, noting the phase of the life cycle in which each defect was found.

  5. Collect Metrics - Collect statistics that show how many defects (along with severity and priority) are found in the different stages of the life cycle.  The statistics will normally show over time that when more defects are resolved earlier in the life cycle, the length of the project decreases and the quality increases.
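
To make item 5 concrete, here is a hedged sketch (in Python) of the metric it describes: defect counts grouped by the life-cycle phase in which each defect was found.  The phase names and sample records are assumptions; the data would normally come from your defect tracking system.

    # Count defects by the phase in which they were found; sample data is illustrative.
    from collections import Counter

    defects = [
        {"id": 1, "severity": 2, "phase_found": "Requirements Walkthrough"},
        {"id": 2, "severity": 3, "phase_found": "Peer Code Review"},
        {"id": 3, "severity": 1, "phase_found": "Formal Code Review"},
        {"id": 4, "severity": 2, "phase_found": "QA Testing"},
        {"id": 5, "severity": 3, "phase_found": "QA Testing"},
    ]

    by_phase = Counter(d["phase_found"] for d in defects)
    total = sum(by_phase.values())

    for phase, count in by_phase.most_common():
        print("%s: %d defect(s) (%.0f%% of total)" % (phase, count, 100.0 * count / total))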