December 22, 2010

How can World Wide Web sites be tested?

Web sites are essentially client-server applications, with web servers and 'browser' clients. Consideration should be given to the interactions between HTML pages, TCP/IP communications, Internet connections, firewalls, applications that run in web pages (such as applets, JavaScript, and plug-ins), and applications that run on the server side (such as CGI scripts, database interfaces, logging applications, dynamic page generators, ASP, etc.).

Additionally, there are a wide variety of servers and browsers, various versions of each, small but sometimes significant differences between them, variations in connection speeds, rapidly changing technologies, and multiple standards and protocols.

The end result is that testing for web sites can become a major ongoing effort.

Other considerations might include:

  • What are the expected loads on the server (e.g., number of hits per unit time), and what kind of performance is required under such loads (such as web server response time and database query response time)?
  • What kinds of tools will be needed for performance testing (such as web load testing tools, other tools already in house that can be adapted, web robot downloading tools, etc.)?
  • Who is the target audience? What kind of browsers will they be using? What kinds of connection speeds will they be using? Are they intra-organization (thus with likely high connection speeds and similar browsers) or Internet-wide (thus with a wide variety of connection speeds and browser types)?
  • What kind of performance is expected on the client side (e.g., how fast should pages appear, how fast should animations, applets, etc. load and run)?
  • Will down time be allowed for server and content maintenance/upgrades? How much?
  • What kinds of security measures (firewalls, encryption, passwords, etc.) will be required, and what are they expected to do? How can they be tested?
  • How reliable are the site's Internet connections required to be? And how does that affect backup system or redundant connection requirements and testing?
  • What processes will be required to manage updates to the web site's content, and what are the requirements for maintaining, tracking, and controlling page content, graphics, links, etc.?
  • Which HTML specification will be adhered to? How strictly? What variations will be allowed for targeted browsers?
  • Will there be any standards or requirements for page appearance and/or graphics throughout a site or parts of a site?
  • How will internal and external links be validated and updated? How often?
  • Can testing be done on the production system, or will a separate test system be required? How are browser caching, variations in browser option settings, dial-up connection variabilities, and real-world Internet 'traffic congestion' problems to be accounted for in testing?
  • How extensive or customized are the server logging and reporting requirements; are they considered an integral part of the system and do they require testing?
  • How are CGI programs, applets, JavaScript, ActiveX components, etc. to be maintained, tracked, controlled, and tested?
Beyond these questions, some general guidelines for page design:

  • Pages should be 3-5 screens max unless content is tightly focused on a single topic. If larger, provide internal links within the page.
  • The page layouts and design elements should be consistent throughout a site, so that it's clear to the user that they're still within a site.
  • Pages should be as browser-independent as possible, or pages should be provided or generated based on the browser-type.
  • All pages should have links external to the page; there should be no dead-end pages.
  • The page owner, revision date, and a link to a contact person or organization should be included on each page.
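The link-validation question above can be partly automated. A minimal sketch, using only Python's standard library, that extracts the links from a page and flags dead-end pages (in a real harness each extracted URL would also be fetched, e.g. with urllib.request, and its HTTP status checked on a schedule):

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects the href of every <a> tag in an HTML page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def page_links(html):
    parser = LinkExtractor()
    parser.feed(html)
    return parser.links

def is_dead_end(html):
    # A dead-end page is one with no outgoing links at all.
    return len(page_links(html)) == 0
```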

May 20, 2010

Graphical User Interface (GUI) testing for website

The testing approach for a website differs considerably from the approach used for other kinds of applications. There are various types of testing, each suited to different solutions and different platforms.

The Graphical User Interface (GUI) is key to any software application. It is, more or less, the language through which data is communicated to a software application.


There are some common techniques of GUI testing.


GUI Verification

Information is processed data provided by a software application, and GUIs are the means through which data is fed into the software for processing. Each GUI element is tied to some piece of functionality. While testing GUIs, map every element that appears on the screen to the functionality it supports: the presence of each element must be justified by the functionality it enables. Check whether the data being captured is actually useful for the quality processes. If it is not, there is no need to capture that data, and the element should be removed from the screen on which it appears.

Field Level Validation
The data fed into the application must be processable. Ensure that the data input into the system is valid for processing. At no point should the system behave unpredictably when invalid data is entered; the system should simply reject any data that is not valid. Tests must be designed to verify this, both at the data level and at the GUI level.
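As an illustration (the field name and rules here are hypothetical, not taken from any particular application), a field-level validator might look like this:

```python
def validate_age(raw):
    """Field-level check: age must be an integer in a sensible range."""
    try:
        age = int(raw)
    except (TypeError, ValueError):
        return False          # reject non-numeric input outright
    return 0 < age < 130      # reject out-of-range values

# Invalid data is rejected cleanly -- the system never behaves awkwardly.
```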

Global Level Validation
GUI fields often depend on one another. In such cases, the combination of their values has to be validated together. Tests should be designed so that the application only accepts data that makes sense as a whole.
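A sketch of such a cross-field check (the booking fields are a hypothetical example): each date may be valid on its own, yet the combination can still be nonsense.

```python
from datetime import date

def validate_booking(check_in, check_out):
    """Global-level check: validate the combination of dependent fields."""
    if not isinstance(check_in, date) or not isinstance(check_out, date):
        return False
    return check_in < check_out   # cannot check out before checking in
```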

Consistency of GUIs
Consistency of the GUIs' look and feel matters a great deal. This has to be checked thoroughly against the style sheet defined in the quality management system (QMS).

Testing Standard Controls
Navigation through the product is one of the key success factors. Checks have to be done to ensure that the necessary controls are present to make navigation easy, and that controls exist to cancel each and every action performed in the system.
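The "every action can be cancelled" requirement can be modelled and tested in miniature. A toy sketch (not a real GUI toolkit) in which each action records its own inverse, so a cancel always restores the previous state:

```python
class Editor:
    """Toy model: every action pushes an inverse onto an undo stack."""
    def __init__(self):
        self.text = ""
        self._undo = []

    def append(self, s):
        prev = self.text            # remember state before the action
        self.text += s
        self._undo.append(lambda: setattr(self, "text", prev))

    def cancel(self):
        if self._undo:              # cancelling with nothing to undo is a no-op
            self._undo.pop()()
```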

January 7, 2010

Software Testing Principles

Software testing is made more difficult by the vast array of programming languages, operating systems, and hardware platforms that have evolved. Yet the most important considerations in software testing are issues of psychology, and from them we can identify a set of vital testing principles or guidelines. Most of these principles may seem obvious, yet they are all too often overlooked.

The principles are:

  • Principle 1: A necessary part of a test case is a definition of the expected output or result.
  • Principle 2: A programmer should avoid attempting to test his or her own program.
  • Principle 3: A programming department should not test its own programs.
  • Principle 4: Thoroughly inspect the results of each test.
  • Principle 5: Test cases must be written for input conditions that are invalid and unexpected, as well as for those that are valid and expected.
  • Principle 6: Examining a program to see if it does not do what it is supposed to do is only half the battle; the other half is seeing whether the program does what it is not supposed to do.
  • Principle 7: Avoid throwaway test cases unless the program is truly a throwaway program.
  • Principle 8: Do not plan a testing effort under the tacit assumption that no errors will be found.
  • Principle 9: The probability of the existence of more errors in a section of a program is proportional to the number of errors already found in that section.
  • Principle 10: Testing is an extremely creative and intellectually challenging task.

Principle 1: A necessary part of a test case is a definition of the expected output or result.

Obvious as this principle is, violating it is one of the most frequent mistakes in program testing. Again, it is something that is based on human psychology. If the expected result of a test case has not been predefined, chances are that a plausible, but erroneous, result will be interpreted as a correct result because of the phenomenon of “the eye seeing what it wants to see.”

Therefore, a test case must consist of two components:

  • A description of the input data to the program.
  • A precise description of the correct output of the program for that set of input data.
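Concretely, a test case is only complete when it pairs each input with a predefined expected output (the function under test here is a deliberately trivial example):

```python
def absolute(x):
    # Program under test: a deliberately simple example function.
    return x if x >= 0 else -x

# Each test case names its input AND its expected output up front,
# so the actual result cannot be rationalized after the fact.
test_cases = [
    (5, 5),     # (input, expected output)
    (-3, 3),
    (0, 0),
]

for given, expected in test_cases:
    assert absolute(given) == expected
```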


Principle 2: A programmer should avoid attempting to test his or her own program.

Any writer knows—or should know—that it’s a bad idea to attempt to edit or proofread his or her own work. You know what the piece is supposed to say and may not recognize when it says otherwise. And you really don’t want to find errors in your own work. The same applies to software authors.

A programmer has constructively designed and coded a program; it is extremely difficult to suddenly change perspective to look at the program with a destructive eye. Most programmers cannot effectively test their own programs because they cannot bring themselves to shift mental gears to attempt to expose errors. In addition, a programmer may subconsciously avoid finding errors for fear of retribution from peers or from a supervisor, a client, or the owner of the program or system being developed.

The program may contain errors due to the programmer’s misunderstanding of the problem statement or specification. If this is the case, it is likely that the programmer will carry the same misunderstanding into tests of his or her own program. This does not mean that it is impossible for a programmer to test his or her own program.

Rather, it implies that testing is more effective and successful if it is performed by someone else, such as a dedicated test engineer.


Principle 3: A programming department should not test its own programs.

The argument here is similar to the previous argument.

In most environments, a programming department or a project manager is largely measured on the ability to produce a program by a given date and for a certain cost. One reason for this is that it is easy to measure time and cost objectives, but it is extremely difficult to quantify the reliability of a program. Therefore, it is difficult for a programming department to be objective in testing its own programs, because the testing process, if approached with the proper definition, may be viewed as decreasing the probability of meeting the schedule and the cost objectives.

Again, this does not say that it is impossible for a programming department to find some of its errors, because departments do accomplish this with some degree of success.

Rather, it implies that it is more economical for testing to be performed by an Independent Testing department.


Principle 4: Thoroughly inspect the results of each test.

This is probably the most obvious principle, but again it is something that is often overlooked. We’ve seen numerous experiments that show many subjects failed to detect certain errors, even when symptoms of those errors were clearly observable on the output listings. Put another way, errors that are found on later tests are often missed in the results from earlier tests.


Principle 5: Test cases must be written for input conditions that are invalid and unexpected, as well as for those that are valid and expected.

There is a natural tendency when testing a program to concentrate on the valid and expected input conditions, at the neglect of the invalid and unexpected conditions. Also, many errors that are suddenly discovered in production programs turn up when the program is used in some new or unexpected way. Therefore, test cases representing unexpected and invalid input conditions seem to have a higher error-detection yield than do test cases for valid input conditions.
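To make the point concrete, here is a sketch (the parsing function is a hypothetical example) in which the invalid and unexpected inputs outnumber the valid ones, exactly as the principle recommends:

```python
def parse_percentage(raw):
    """Accepts strings like '42%' and returns 42; raises ValueError otherwise."""
    if not isinstance(raw, str) or not raw.endswith("%"):
        raise ValueError("expected a string ending in '%'")
    value = int(raw[:-1])           # may itself raise ValueError
    if not 0 <= value <= 100:
        raise ValueError("percentage out of range")
    return value

# Valid-and-expected cases...
assert parse_percentage("42%") == 42
# ...but the higher-yield cases are the invalid and unexpected inputs:
for bad in ("42", "%", "abc%", "150%", "", None):
    try:
        parse_percentage(bad)
        raised = False
    except ValueError:
        raised = True
    assert raised, bad
```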


Principle 6: Examining a program to see if it does not do what it is supposed to do is only half the battle; the other half is seeing whether the program does what it is not supposed to do.

This is a corollary to the previous principle. Programs must be examined for unwanted side effects. For instance, a payroll program that produces the correct paychecks is still an erroneous program if it also produces extra checks for nonexistent employees or if it overwrites the first record of the personnel file.
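The payroll example can be expressed as a test: a sketch (the data model is invented for illustration) that checks both halves of the battle, the correct output and the absence of unwanted side effects:

```python
def run_payroll(employees):
    """Toy payroll: produce one check per active employee, nothing else."""
    return [{"name": e["name"], "amount": e["salary"]}
            for e in employees if e.get("active")]

employees = [
    {"name": "Ada", "salary": 5000, "active": True},
    {"name": "Bob", "salary": 4000, "active": False},  # left the company
]
checks = run_payroll(employees)

# Half the battle: the correct check was produced...
assert checks[0] == {"name": "Ada", "amount": 5000}
# ...the other half: NO extra checks for inactive or nonexistent employees,
# and the input records were not overwritten.
assert len(checks) == 1
assert employees[1]["salary"] == 4000
```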


Principle 7: Avoid throwaway test cases unless the program is truly a throwaway program.

This problem is seen most often in the use of interactive systems to test programs. A common practice is to sit at a terminal and invent test cases on the fly, and then send these test cases through the program. The major problem is that test cases represent a valuable investment that, in this environment, disappears after the testing has been completed. Whenever the program has to be tested again (for example, after correcting an error or making an improvement), the test cases must be reinvented. More often than not, since this reinvention requires a considerable amount of work, people tend to avoid it.

Therefore, the retest of the program is rarely as rigorous as the original test; if the modification causes a previously functional part of the program to fail, the error often goes undetected. Saving test cases and running them again after changes to other components of the program is known as regression testing.
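The saved-suite idea can be sketched very simply (the function under test is an invented example): the whole list is rerun after every fix or improvement, not just the case that motivated the change.

```python
def title_case(s):
    # Program under test (a simple example).
    return " ".join(word.capitalize() for word in s.split())

# Saved, repeatable test cases instead of ad-hoc terminal sessions.
REGRESSION_SUITE = [
    ("hello world", "Hello World"),
    ("a", "A"),
    ("", ""),
    ("  spaced  out ", "Spaced Out"),
]

def run_suite():
    """Return the list of failing cases; an empty list means all passed."""
    return [(i, o) for i, o in REGRESSION_SUITE if title_case(i) != o]

assert run_suite() == []   # every saved case still passes after each change
```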


Principle 8: Do not plan a testing effort under the tacit assumption that no errors will be found.

This is a mistake project managers often make and is a sign of the use of the incorrect definition of testing—that is, the assumption that testing is the process of showing that the program functions correctly.

Once again, the definition of testing is the process of executing a program with the intent of finding errors.


Principle 9: The probability of the existence of more errors in a section of a program is proportional to the number of errors already found in that section.

At first glance it makes little sense, but it is a phenomenon present in many programs. For instance, if a program consists of two modules, classes, or subroutines A and B, and five errors have been found in module A and only one error has been found in module B, and if module A has not been purposely subjected to a more rigorous test, then this principle tells us that the likelihood of more errors in module A is greater than the likelihood of more errors in module B.


Principle 10: Testing is an extremely creative and intellectually challenging task.

It is probably true that the creativity required in testing a large program exceeds the creativity required in designing that program. We already have seen that it is impossible to test a program sufficiently to guarantee the absence of all errors. Methodologies discussed later in this book let you develop a reasonable set of test cases for a program, but these methodologies still require a significant amount of creativity.