
Expensive Automation Testing Tool Failure

In 1996, one large company set out to evaluate the various commercial automation tools available at the time. They brought in eager technical sales staff from the various vendors, watched demonstrations, and performed fairly thorough internal evaluations of each tool.

By 1998, they had chosen a single vendor and placed an initial order for over $250,000 worth of product licensing, maintenance contracts, and onsite training. The tools and training were distributed throughout the company to various test departments--each working on its own projects.

None of these test projects had anything in common. The applications were vastly different, and each project had its own schedule and deadlines to meet. Yet every one of these departments began separately coding functionally identical common libraries: routines for setting up the Windows test environment, routines for accessing the Windows programming interface, file-handling routines, string utilities, database access routines--the list of duplicated code was disheartening!
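To make that duplication concrete, here is a minimal sketch of the kind of general-purpose helpers each department wrote for itself. It is in Python purely for illustration--the real libraries were written in the vendor tool's proprietary scripting language--and every name below is hypothetical.

    # Hypothetical examples of the general-purpose helpers each team rewrote from
    # scratch. Names and behavior are illustrative only.
    import os
    import shutil


    def prepare_test_environment(work_dir: str) -> str:
        """Environment-setup routine: create a clean working directory for a test run."""
        if os.path.exists(work_dir):
            shutil.rmtree(work_dir)
        os.makedirs(work_dir)
        return work_dir


    def read_expected_results(path: str) -> list:
        """File-handling routine: load a baseline file as a list of trimmed lines."""
        with open(path, encoding="utf-8") as handle:
            return [line.rstrip("\n") for line in handle]


    def normalize_whitespace(text: str) -> str:
        """String utility: collapse runs of whitespace so comparisons ignore formatting."""
        return " ".join(text.split())

Every department needed helpers like these; none of them shared a single line.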

For their test designs, they each captured application-specific interactive tests using the capture/replay tools. Some groups went the next step and modularized key reusable sections, creating libraries of application-specific test functions or scenarios. This was meant to reduce the code duplication and maintenance burden that plagues purely captured test scripts. For some of the projects, this might have been appropriate if done with sufficient planning and an appropriate automation framework, but that was seldom the case.
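As a rough sketch of that modularization step (again in Python, with hypothetical names and stand-in gui_click/gui_type primitives rather than the vendor's actual scripting language), the idea was to capture an interaction such as a login once, wrap it in a named function, and have every test call it:

    # Hypothetical sketch of turning a captured login sequence into a reusable
    # application-specific function. The gui_* helpers stand in for whatever
    # click/type primitives the capture/replay tool actually provided.

    def gui_click(control_id: str) -> None:
        print(f"click {control_id}")               # placeholder for the tool's click primitive


    def gui_type(control_id: str, text: str) -> None:
        print(f"type '{text}' into {control_id}")  # placeholder for the tool's type primitive


    def login(username: str, password: str) -> None:
        """Reusable scenario: one place to maintain when the login screen changes."""
        gui_click("LoginWindow.UserField")
        gui_type("LoginWindow.UserField", username)
        gui_click("LoginWindow.PasswordField")
        gui_type("LoginWindow.PasswordField", password)
        gui_click("LoginWindow.OKButton")


    def test_view_account_summary() -> None:
        login("qa_user", "secret")                 # every test reuses the same scenario
        gui_click("MainMenu.Accounts.Summary")

When the login dialog changes, only login() needs updating rather than every captured script--exactly the maintenance benefit these teams were after. The trouble, as described below, was that each team built such libraries independently.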

With all these modularized libraries, testers could create functional automated tests in the automation tool's proprietary scripting language through a combination of interactive test capture, manual editing, and manual scripting.

One problem was that, as separate test teams, they did not think past their own individual projects. And although each was building something of a reusable framework, each framework was completely unique--even where the common library functions were the same! This meant duplicate development, duplicate debugging, and duplicate maintenance. Understandably, each project still had looming deadlines, and each was forced to limit its automation efforts in order to get real testing done.

As changes to the various applications began breaking automated tests, script maintenance and debugging became a significant challenge. Upgrades to the automation tools themselves also caused significant and unexpected script failures; in some cases, teams had to revert (downgrade) to older versions of the tools. Allocating resources for continued test development and test-code maintenance became a difficult issue, and application and website quality and performance suffered in the meantime.

Eventually, most of these automation projects were put on hold. By the end of 1999--less than two years after the inception of this large-scale automation effort--over 75% of the test automation tools were back on the shelf, waiting for another chance at some later date.

