Sunday, November 16, 2008

HOW TO DEVELOP, MAINTAIN, AND SUPPORT A QUALITY MANAGEMENT AND DEVELOPMENT PROCESS

By James Downs

The task of defining test plans, acceptance criteria, and testing deliverables and processes for any software development effort faces many different and evolving challenges, from identifying applicable processes to maintaining those decisions over time.
Choosing a tool to support these practices and strategies can not only alleviate the burden placed on those involved, but also add efficiency, organization, and a backbone for success.

Before explaining how we started and implemented the quality initiative at my company, let me provide a little background information. Meridian Knowledge Solutions is a leading provider of learning management system (LMS) and learning content management system software. We also provide professional services, courseware, development, and hosting services. We serve 4.5 million users at more than 200 public- and private-sector employers.

Our flagship product, Meridian Global LMS, integrates learning content management, workforce analytics, knowledge management, and competency modeling in one LMS. Meridian Global LMS provides users with access to courseware, documents, data, instructors, and other learners on demand. Any material designed to aid job performance is easily and readily available and completely integrated into a single Web site.

In this article, you will get an overview of the testing methodologies and processes used at Meridian, why we chose Oracle Test Manager for Web Applications to help us manage and support our processes and quality initiatives, and how we use Oracle Test Manager for Web Applications on a daily basis.

MERIDIAN QA: IMPLEMENTING A QUALITY INITIATIVE
As many people are aware, defining even a single process from the ground up to support any software lifecycle is an arduous task. Trying to define all the processes needed to join every portion of the lifecycle can feel practically impossible. Fortunately, it can be done, and it is not as tough as you might think.
In 2004, we decided to go back to the drawing board and redefine the responsibilities and accountability of quality assurance (QA) as it related to our new product (Meridian Global LMS) and to QA's companywide image in general.

Accomplishing this goal meant not only defining new methodologies for the product QA team, but figuring out how to tie these processes together with existing processes from other teams in the lifecycle. Our initial priority was to keep things as simple and streamlined as possible. “Working smarter, not harder” appeared to be the perfect motto for our agenda.

We started with centralized and individual processes based on established industry standards, as well as our own knowledge of what has worked in the real world over our many years of collective experience. Portions of our core processes derive from basic, well-established best practices from leading entities, such as the Software Engineering Institute's Capability Maturity Model Integration approach. Lastly, we wanted to make sure any shortcomings we had experienced in the past would not be repeated in the new processes.

We developed simple and rational best practices for a change control process, a test strategy, documentation practices, and readiness review procedures, as well as supporting templates and guidelines for QA deliverables. None of these deviate far from what other companies and organizations implement when they set up a QA and lifecycle program. We believe our advantage lies in our commitment to these QA initiatives and in how well they work with our development, requirements, and management processes.

So what are all these fancy processes, and how can you determine what to create? Start by defining the basic required processes, and supplement them with optional processes that add more value when needed. The required deliverables include formal detailed test plans, final test reports, and cumulative testing metrics, among others. Processes that can be very advantageous include test readiness reviews, release readiness reviews, and a formalized change control board (CCB). All of these, among others, make up the Meridian ideology and methodology.

So we are all set, right? Now what? The big question and concern then became: how can we maintain and support all of this while keeping up with constant code changes, requirement updates, and product scope tangents?

The answer was Oracle Test Manager for Web Applications: a tool that has helped us seamlessly bring all these pieces together, while collaboratively enabling communication through a common portal.

ORACLE TEST MANAGER FOR WEB APPLICATIONS ARCHITECTURE: HOW IT SUPPORTS OUR INITIATIVES

The biggest advantage of Oracle Test Manager for Web Applications is its simplicity: all fundamental areas of the software lifecycle are available in a simple and intuitive interface. I personally have used numerous other applications for requirements, testing, and defect tracking that required too much time cross-pollinating information and trying to keep the independent applications in sync. Those applications were often far more expensive to implement and maintain than Oracle Test Manager for Web Applications, and they still are.

One of the most time-consuming, yet most accountability-critical, processes in QA is creating traceability between requirements, tests, and defects. This is probably one of the most important capabilities Oracle Test Manager for Web Applications provides to us at Meridian. By allowing users to associate requirements with tests (see figure 1) and tests with issues (see figure 2), and thus automatically associating issues with requirements (see figure 3), Oracle Test Manager for Web Applications provides traceability and mapping that aids our CCB process, issue resolution, and test preparation and execution.

Figure 1

Figure 2

Figure 3
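
To make this derived mapping concrete, here is a minimal conceptual sketch in Python. It is not Oracle Test Manager for Web Applications' internal schema; the IDs and structures are made up for illustration.

    # Requirements link to tests, and tests link to issues; the
    # requirement-to-issue traceability then falls out automatically.
    req_to_tests = {
        "REQ1": ["TEST100", "TEST101"],
        "REQ2": ["TEST101"],
    }
    test_to_issues = {
        "TEST100": ["ISSUE7"],
        "TEST101": ["ISSUE8", "ISSUE9"],
    }

    def issues_for_requirement(req_id):
        """Derive requirement-to-issue links through the shared tests."""
        issues = []
        for test_id in req_to_tests.get(req_id, []):
            for issue_id in test_to_issues.get(test_id, []):
                if issue_id not in issues:
                    issues.append(issue_id)
        return issues

    print(issues_for_requirement("REQ1"))  # ['ISSUE7', 'ISSUE8', 'ISSUE9']
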
Let us examine the Oracle Test Manager for Web Applications architecture and its independent Requirements, Tests, and Issues modules. The basic setup, using the dedicated license server and a Microsoft SQL Server back end, was an easy choice and a convenient configuration for us. Microsoft SQL Server provides a powerful and simple solution for database maintenance and backup support. Additionally, the dedicated Oracle Test Manager for Web Applications server does not have to be overly robust for basic operation, unlike the servers other lifecycle solutions require.
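
As a rough sketch of how simple routine maintenance can be with this back end, the snippet below issues a full SQL Server backup from Python via pyodbc. The server name, database name, and backup path are assumptions for illustration; in practice this is just as likely a scheduled T-SQL job.

    import pyodbc

    # BACKUP DATABASE cannot run inside a transaction, hence autocommit.
    conn = pyodbc.connect(
        "DRIVER={SQL Server};SERVER=qa-db;DATABASE=master;"
        "Trusted_Connection=yes",
        autocommit=True,
    )
    # The database name and backup path are hypothetical examples.
    conn.cursor().execute(
        "BACKUP DATABASE [OTMProjects] "
        "TO DISK = 'D:\\backups\\OTMProjects.bak'"
    )
    conn.close()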

The Oracle Test Manager for Web Applications Requirements module provides a standardized platform for creating and maintaining design and functional requirements. Its out-of-the-box fields and options provide adequate support for even the most complex application, and the ability to create custom fields in the Oracle Test Manager for Web Applications Administrator makes the platform even more powerful and flexible, enabling you to customize the application to match your defined processes. The additional ability to attach files, such as design images and functionality workflows, increases productivity by giving developers and testers the information they need to write code and tests correctly the first time. Also, Oracle Test Manager for Web Applications maintains all previously saved versions of a requirement and provides the ability to save comments with each saved version (through custom fields).
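
A minimal sketch of that append-only versioning idea, with a comment stored per save (the class and fields are illustrative, not the tool's actual storage):

    class Requirement:
        """Illustrative versioned requirement; not the tool's storage."""

        def __init__(self, req_id, text):
            self.req_id = req_id
            self.versions = [(1, text, "initial version")]

        def save(self, new_text, comment):
            next_version = self.versions[-1][0] + 1
            self.versions.append((next_version, new_text, comment))

    req = Requirement("REQ1", "Learners can enroll in a course.")
    req.save("Learners can enroll in or withdraw from a course.",
             "added withdrawal per CCB decision")
    for version, text, comment in req.versions:
        print(f"v{version}: {text}  [{comment}]")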

The Issues module shares all the same productivity, efficiency, and flexibility standards as the Requirements module, but it also provides us at Meridian with the platform we need to manage our change control process effectively and seamlessly. We take advantage of the custom field functionality to add any additional fields and options we need to manage ownership and expectations of defect and enhancement changes (see figure 4). The Issues module does not manage this process for us, but without it our change control process would not be anywhere near as efficient and seamless as it is today. Additionally, the information contained in the Issues module gives us great flexibility in managing release readiness, as well as in metrics reporting for each software release.


Figure 4

Last, but certainly not least, is the Tests module in Oracle Test Manager for Web Applications. From a QA perspective, this is naturally the focal point of the application. The ability to structure tests by means of folders and test groups is integral to proper test management and maintenance, but the ability to manage individual manual, automated (from Oracle Functional Testing for Web Applications), and third-party tests is the core advantage of this module. For anyone converting existing QA material to a managed system like Oracle Test Manager for Web Applications, the manual test process is second to none. Existing tests in applications such as Microsoft Word or Microsoft Excel can be ported simply and easily into the Oracle Test Manager for Web Applications test structure. Instantly, tests can be managed in a central location and reused countless times (see figure 5). What better way to make a test update in one location and have the change applied seamlessly to every instance where that test is used?



Figure 5
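
As a sketch of what that porting step can look like, the snippet below groups a hypothetical CSV export of manual tests (one row per step, with made-up columns TestName, Step, Action, and Expected) into per-test step lists ready to be entered into a managed structure. The file name and layout are assumptions, not a prescribed format.

    import csv

    # Group a hypothetical spreadsheet export into one record per test.
    tests = {}
    with open("manual_tests.csv", newline="") as f:
        for row in csv.DictReader(f):
            tests.setdefault(row["TestName"], []).append(
                (row["Step"], row["Action"], row["Expected"])
            )

    for name, steps in tests.items():
        print(name)
        for step, action, expected in steps:
            print(f"  {step}. {action} -> expect: {expected}")
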
Similarly, for more-advanced QA departments, the ability to maintain automated tests from Oracle Functional Testing for Web Applications shares the same core advantages as manual tests. In the end, the Tests module alone can deliver enough of an advantage to outweigh a QA department's implementation and maintenance costs. It is such a good tool for us that our QA personnel can easily spend an entire working day logged into Oracle Test Manager for Web Applications conducting all of their daily and long-term tasks.

Collectively, as stated previously, these three core modules enable the associations between them to completely cover the traceability that is vital to so many organizations. Some would say this traceability coverage, which is practically seamless by nature in Oracle Test Manager for Web Applications, saves our team days, if not weeks, on every release by making sure our functionality is covered from A to Z.

MANUAL VERSUS AUTOMATED TESTING
The same age-old question exists for us as for every other QA organization: do we use automated testing or manual testing? No surprise here; we have to make the same decisions as any other managed QA department. Is it cost effective to automate and maintain this type of testing? If so, what volume of tests do we automate? How often do we run these tests? How do we properly implement a sound automation practice? The questions are nearly endless, and an entire white paper could be devoted (and undoubtedly has been) to this subject.

One of the biggest advantages of automated testing is repetitive execution of functional test scenarios against a consistent and expected interface. Basically, automated regression testing is probably the safest route to take: because regression testing is most often executed against relatively unchanged functions, it is a safer bet to automate and requires less maintenance.

We spend so much time adding new features and changing outdated functionality that we have had little chance to automate portions of our application. This is in no way a bad thing, especially since we identified it from the get-go and did not talk ourselves into the false hope that automation could somehow “save us” from the “perils” of manual testing. In fact, manual testing has been extremely effective for us. Our features change often, the application is highly customizable, and our user interface is tweaked so frequently that we have to be very selective in what we automate so we do not create unnecessary maintenance.

That said, because our QA processes were introduced years ago and are now well established, we have been designing a formal process to implement automation at the levels that would give us the greatest benefit. When the time is right, we will gradually implement smoke testing at the basic functional level, where it is easier to maintain yet buys us the greatest benefit. Because we apply new builds of code to our QA environments so often, smoke testing will help us identify low-level defects at the root more efficiently, rather than waiting for a tester to execute the process manually.
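
A minimal sketch of the kind of smoke check we have in mind, using only the Python standard library; the URLs are placeholders, not our actual QA environment:

    import sys
    import urllib.request

    # Placeholder URLs for the key pages a new build must serve at all.
    SMOKE_URLS = [
        "http://qa-server/lms/login",
        "http://qa-server/lms/catalog",
        "http://qa-server/lms/reports",
    ]

    failures = []
    for url in SMOKE_URLS:
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                if resp.status != 200:
                    failures.append(f"{url}: HTTP {resp.status}")
        except OSError as exc:  # URLError subclasses OSError
            failures.append(f"{url}: {exc}")

    if failures:
        print("Smoke test FAILED:\n" + "\n".join(failures))
        sys.exit(1)
    print("Smoke test passed.")
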
Just like all our other processes, automated testing is a great support tool, but it does not dictate our processes and end results. Using Oracle Test Manager for Web Applications to manage the execution of automated tests, as well as to schedule the process runs, will fall right into place with the other modules we already use.

REPORTING AND STAKEHOLDER BUY-IN
The reporting capability has vastly improved in recent releases of Oracle Test Manager for Web Applications. Of particular interest is the dashboard-style setup of the reports. In the past, we have used our own collaborative dashboard (in the form of weekly/monthly metrics) to show management and owners such things as product status, defect resolution rate, and test/issue ownership. These metrics are always vital for buy-in and explaining progress; however, when they need to be presented frequently, they can take a real chunk of time to create. The dashboard reporting feature can greatly reduce the time needed to put these numbers together, especially on a moment's notice.
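
The arithmetic behind a number like defect resolution rate is simple; here is a sketch with made-up issue records and illustrative status values:

    # Hypothetical issue records; status values are illustrative.
    issues = [
        {"id": "ISSUE1", "status": "Closed"},
        {"id": "ISSUE2", "status": "Closed"},
        {"id": "ISSUE3", "status": "Open"},
        {"id": "ISSUE4", "status": "In Progress"},
    ]

    resolved = sum(1 for i in issues if i["status"] == "Closed")
    rate = 100.0 * resolved / len(issues)
    print(f"Resolved {resolved} of {len(issues)} issues ({rate:.0f}%)")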

Also somewhat new, but empowering, is the ability to create new reports for yourself and publish them for others to use. Different members of our product team employ their own private reports to track internal progress, but also publish some of them to the team so the information can be shared. This can be a huge advantage for checking status and other information instantly, without the need to call, e-mail, or meet with team members.

Standard out-of-the-box reports should not be overlooked either. We use these reports in varying degrees to report such things as test progress and status for release readiness, issue resolution and results to support our final test reports, and requirements-to-test traceability, as alluded to earlier. The Reporting module is another great advantage of the single, integrated solution offered by Oracle Test Manager for Web Applications.

QUICK TIPS AND TRICKS
With all applications, there are certain shortcuts and tidbits that can help manage and maintain efficiency, as well as contribute to knowledge transfer within the team. Here are some of the tips and tricks I have found over years of using Oracle Test Manager for Web Applications, based on how we use it in our organization.

One of the biggest time-savers is reusing custom fields across the three core modules as often as possible. For example, we use the Version field across all three modules. Because we apply new versions of code to our testing environment at least once daily, the ability to log in and update the current version in one central location saves us time and ensures we don't make any typos or mistakes. Naturally, correct versioning helps with reporting and metrics accuracy.
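
The principle is the same one that drives any single-source constant; a tiny sketch, with a made-up build number:

    # One central value, updated once per deployed build; every new
    # record is stamped from it, so typos cannot creep in per record.
    CURRENT_BUILD = "5.2.0.1148"  # hypothetical build number

    def new_issue(summary):
        return {"summary": summary, "version": CURRENT_BUILD}

    print(new_issue("Catalog search returns duplicates"))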

A very good tool recently included in Oracle Test Manager for Web Applications is the Screen Capture Utility. This is a wonderful utility that can really cut the time it takes to include screenshots of bugs and GUI captures, simplifying the process of entering issues or attaching GUI captures to requirements or tests. It is much more time efficient than the old method of pressing the print screen (PrtScn) key and then pasting, cropping, and saving the picture in a general tool such as Microsoft Paint.

Another interesting tidbit we have found is not to abandon the Oracle Test Manager for Web Applications desktop interface. With more and more improvements to the Oracle Test Manager for Web Applications Web interface, it can be enticing to move to the more modern interface full time. However, there are a few key advantages to continuing to use the desktop interface. One of these is drag-and-drop functionality. When working with large volumes of functional tests and test groups, as we do, the ability to organize these test assets by dragging and dropping is a huge time-saver compared with the move left/right/up/down buttons in the Web interface. The same advantage exists for requirements as well.
A final, but important, trick is to make sure unique IDs are turned on. In the Tools > Options menu, select the check box to display unique IDs rather than the default index sorting (see figure 6). As you reorganize requirements and tests, the default indexing can change, which can be a real nuisance if you monitor traceability closely. Why? Because the index value changes as the entity is moved. Using unique IDs avoids this problem: the unique ID follows the Oracle Test Manager for Web Applications entity and maintains its identity through any movement. Also under Options, you can control the number of records that appear under a single node.



Figure 6
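
The difference is easy to see in miniature; in the Python sketch below (records made up), the positional index shifts after a reorganization while the unique ID stays with the record:

    tests = ["TEST100", "TEST200", "TEST300"]  # unique IDs, in display order

    print(tests.index("TEST300"))  # positional index is 2

    tests.insert(0, "TEST400")     # reorganize: a new test moves to the top
    print(tests.index("TEST300"))  # positional index is now 3...
    print("TEST300" in tests)      # ...but the unique ID still finds it
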
Another note about the unique IDs: use a different database instance per test manager project when copying a project (one project per database). Why? If you copy a project within the same database, the unique IDs will jump, so to speak. Because the IDs must remain unique, what used to be TEST100 in Project A will become TEST500 in the copy of Project A. This can obviously throw a kink into maintaining consistent traceability. To avoid it, copy the project to a new, clean database instance, where the ID mapping will remain the same. Copying within the same database can certainly have its advantages in certain situations, but we have found one database instance per test manager project to be the best way to maintain our Oracle Test Manager for Web Applications archiving process and overall traceability. Following a major release, we copy the test manager project to a new database; the copy becomes the archive for that release, and we continue work in the “main” project.
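
A miniature illustration of that renumbering (the copy logic here is made up, not the tool's implementation): a copy inside a database that already holds IDs takes the next free numbers, while a copy into an empty database keeps the numbering aligned.

    def copy_project(tests, next_free_id):
        """Assign sequential IDs starting at the next free number."""
        return {f"TEST{next_free_id + i}": body
                for i, body in enumerate(tests.values())}

    project_a = {f"TEST{100 + i}": f"steps for test {i}" for i in range(3)}

    same_db_copy = copy_project(project_a, next_free_id=500)
    fresh_db_copy = copy_project(project_a, next_free_id=100)

    print(list(same_db_copy))   # ['TEST500', 'TEST501', 'TEST502']
    print(list(fresh_db_copy))  # ['TEST100', 'TEST101', 'TEST102']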

CONCLUSION
In the end, Oracle Test Manager for Web Applications offers Meridian a relatively low-cost solution (especially when compared with similar vendors) that supports almost all aspects of our product development lifecycle. Because it is an extremely low-maintenance and easily manageable application, we can spend more time relying on its benefits and less time worrying about its dependability. It is important to note that we chose Oracle Test Manager for Web Applications as a supportive solution to help drive and maintain our processes and initiatives after they had been identified and defined. I believe that if an organization reverses this and chooses a tool first, it can too easily become dependent on that solution, and its processes can become too restrictive and inflexible as the product and organization change over time.
In summary, Oracle Test Manager for Web Applications is a tool that assists us with decision-making, productivity, and knowledge throughput in a centralized interface. There is no doubt our efficiency would wane greatly without instant traceability and the easy reuse and optimization of tests. Oracle Test Manager for Web Applications is a viable and credible solution that any organization should strongly consider to help support its lifecycle efforts.
