With Love
I'm gonna post some useful testing stuff here to share the knowledge. Enjoy learning!
Monday, December 31, 2007
What is a Bug/Defect?
A bug is a difference between the actual and expected results.
Expected result -> the way the application should behave (i.e., based on the customer requirements) when some action/event is performed on it.
This is defined before execution.
Actual result -> the result the application actually displays after the action is performed on it.
This is observed after execution.
If the expected and actual results don't match, it is called a bug,
i.e., a difference between what the application is intended to do and what it actually does when executed.
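To make the definition concrete, here is a minimal Python sketch. The discount function and its numbers are invented for illustration; the point is just the comparison between the expected result (known before execution) and the actual result (observed after execution):

# Hypothetical requirement: orders of 100 or more get a 10% discount.
def apply_discount(order_total):
    # Developer's implementation, with a deliberate boundary bug:
    # it uses > where the requirement implies >=.
    if order_total > 100:
        return order_total * 0.9
    return order_total

expected = 90.0               # expected result, from the requirement
actual = apply_discount(100)  # actual result, from execution

if actual != expected:
    print("Bug: expected", expected, "but got", actual)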
Effects of Bugs:
Bugs can have a wide variety of effects, with varying levels of inconvenience to the user of the program.
Some bugs have only a subtle effect on the program's functionality and may thus lie undetected for a long time. More serious bugs may cause the program to crash or freeze, leading to a denial of service.
Others qualify as security bugs and might, for example, enable a malicious user to bypass access controls in order to obtain unauthorized privileges.
Regards,
Pavankumar Nandagiri.......
Saturday, December 29, 2007
What is Risk & its Categories?
"Risk are future uncertain events with a probability of occurrence and a potential for loss”
Risk identification and management are the main concerns in every software project. Effective analysis of software risks helps with effective planning and assignment of work.
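One common way to make that analysis concrete (standard practice, not specific to this post) is to compute risk exposure as the probability of occurrence times the potential loss, then prioritize the risks by exposure. A minimal Python sketch, with invented names and figures:

# Risk exposure = probability of occurrence x potential loss.
# All risk names and figures below are invented for illustration.
risks = {
    "schedule slip": (0.30, 50000),       # (probability, loss in dollars)
    "key resource leaves": (0.10, 80000),
    "requirement churn": (0.50, 20000),
}
# Rank the risks from highest exposure to lowest.
for name, (prob, loss) in sorted(risks.items(),
                                 key=lambda r: r[1][0] * r[1][1],
                                 reverse=True):
    print(name, "-> exposure:", prob * loss)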
Categories of risks:
Schedule Risk: The project schedule slips when project tasks and schedule release risks are not addressed properly.
Schedule risks mainly affect the project and, ultimately, the company's economy, and may lead to project failure.
Schedules often slip due to the following reasons:
1. Wrong time estimation
2. Resources (staff, systems, individual skills, etc.) are not tracked properly.
3. Failure to identify complex functionalities and time required to develop those functionalities.
4. Unexpected project scope expansions.
Budget Risk:
1. Wrong budget estimation
2. Cost overruns
3. Project scope expansion
Operational Risks:
Risks of loss due to improper process implementation, failed systems, or external events. Causes of operational risks:
1. Failure to address priority conflicts
2. Failure to resolve responsibilities
3. Insufficient resources
4. No proper subject training
5. No resource planning
6. No communication within the team
Technical Risks:
Technical risks generally lead to failure of functionality and performance.
Causes of technical risks are:
1. Continuously changing requirements
2. No advanced technology available, or the existing technology is in its initial stages
3. The product is complex to implement
4. Difficult integration of project modules
Programmatic Risks:
These are external risks beyond the operational limits; they are uncertain and outside the control of the program. These external events can be:
1. Running out of funds
2. Market developments
3. Changes in customer product strategy and priority
4. Government rule changes
Regards,
Pavankumar nandagiri.........
Friday, December 28, 2007
Hi guys, I'm posting some information on cookies...
Some Test cases for web application cookie testing:
The first obvious test case is to check whether your application writes cookies properly to disk. You can also use a cookie-tester application if you don't have a web application to test but want to understand the cookie concept for testing.
Test cases:
1) As a cookie privacy policy, make sure from your design documents that no personal or sensitive data is stored in a cookie.
2) If you have no option other than saving sensitive data in a cookie, make sure the data stored in the cookie is in encrypted format.
3) Make sure there is no overuse of cookies on the site under test. Overuse will annoy users if the browser prompts for cookies too often, and this could result in loss of site traffic and eventually loss of business.
4) Disable cookies from your browser settings: If your site uses cookies, its major functionality will not work when cookies are disabled. Disable cookies, then try to access the web site under test and navigate through the site. See whether appropriate messages are displayed to the user, like "For smooth functioning of this site, make sure that cookies are enabled in your browser". There should not be any page crash due to disabled cookies. (Before performing this test, make sure you close all browsers and delete all previously written cookies.)
5) Accept/reject some cookies: The best way to check web site functionality is not to accept all cookies. If your web application writes 10 cookies, then randomly accept some and reject the others, say accept 5 and reject 5. To execute this test case, set your browser options to prompt whenever a cookie is about to be written to disk; on this prompt you can either accept or reject each cookie. Then try to access the major functionality of the web site and see whether pages crash or data gets corrupted.
6) Delete cookies: Allow the site to write its cookies, then close all browsers and manually delete all cookies for the web site under test. Access the web pages and check the behavior of the pages.
7) Corrupt the cookies: Corrupting a cookie is easy, since you know where cookies are stored. Manually edit a cookie in Notepad and change its parameters to some vague values: alter the cookie content, the name of the cookie, or the expiry date, and observe the site functionality. In some cases a corrupted cookie allows its data to be read by another domain; this should not happen with your web site's cookies. Note that cookies written by one domain, say rediff.com, can't be accessed by another domain, say yahoo.com, unless the cookies are corrupted and someone is trying to hack the cookie data. (See the sketch after this list.)
8) Check the deletion of cookies from your web application page: Sometimes a cookie written by a domain, say rediff.com, may be deleted by a different page under the same domain. This is the general case when you are testing an 'action tracking' web portal: an action-tracking or purchase-tracking pixel is placed on the action page, and when a user performs the action or purchase, the cookie written to disk is deleted to avoid logging multiple actions from the same cookie. Check that reaching your action or purchase page deletes the cookie properly and that no further invalid actions or purchases get logged from the same user.
9) Cookie testing on multiple browsers: This is an important case: check whether your web application writes cookies properly on different browsers as intended and whether the site works properly using these cookies. Test your web application on the major browsers, such as Internet Explorer (various versions), Mozilla Firefox, Netscape, and Opera.
10) If your web application uses cookies to maintain a user's logged-in state, log in to the application with some username and password. In many cases you can see the logged-in user ID parameter directly in the browser address bar. Change this parameter to a different value: if the previous user ID is 100, make it 101 and press Enter. A proper access message should be displayed, and the user should not be able to see another user's account.
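If you want to automate parts of cases 2 and 7, here is a minimal sketch using Python's requests library. The URL, form fields, and cookie name are hypothetical; adapt them to the site under test:

import requests

BASE = "http://example.com"  # hypothetical site under test
session = requests.Session()
session.post(BASE + "/login", data={"user": "tester", "password": "secret"})

# Case 2: inspect each stored cookie; sensitive values should be
# encrypted or opaque, never readable plain text.
for cookie in session.cookies:
    print(cookie.name, "=", cookie.value, "| secure flag:", cookie.secure)

# Case 7: tamper with a cookie, then confirm the site rejects it.
session.cookies.set("session_id", "corrupted-value", domain="example.com")
response = session.get(BASE + "/account")
print("Status after tampering:", response.status_code)  # expect 4xx or a redirect to login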
Regards,
PavanKumar Nandagiri......
Thursday, December 27, 2007
Do You Know What a USE Case IS?
A use case is a technique used in software and systems engineering to capture the functional requirements of a system. Use cases describe the interaction between a primary system actor (the initiator of the interaction) and the system itself, represented as a sequence of simple steps. Actors are someone or something that exists outside the system under study and takes part in a sequence of activities in a dialogue with the system to achieve some goal; they may be end users, other systems, or hardware devices. Each use case is a complete series of events, described from the point of view of the actor.
According to Bittner and Spence, "Use cases, stated simply, allow description of sequences of events that, taken together, lead to a system doing something useful."
Each use case describes how the actor will interact with the system to achieve a specific goal. One or more scenarios may be generated from each use case, corresponding to the detail of each possible way of achieving that goal. Use cases typically avoid technical jargon, preferring instead the language of the end user or domain expert. Use cases are often co-authored by systems analysts and end users. The UML use case diagram can be used to graphically represent an overview of the use cases for a given system.
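For example (a made-up illustration): in an ATM system, the actor 'Customer' has a use case 'Withdraw Cash'. One scenario: the customer inserts a card, enters a PIN, chooses an amount, and the system dispenses the cash and returns the card. An alternate scenario of the same use case covers the insufficient-balance response.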
Regards,
Pavankumar Nandagiri.....
WHO DOES THE TESTING?
Software testing is not a one person job. It takes a team, but the team may be larger or smaller depending on the size and complexity of the application being tested. The programmer(s) who wrote the application should have a reduced role in the testing if possible. The concern here is that they’re already so intimately involved with the product and "know" that it works that they may not be able to take an unbiased look at the results of their labors.
Testers must be cautious, curious, critical but non-judgmental, and good communicators. One part of their job is to ask questions that the developers might not be able to ask themselves, or that would be awkward, irritating, insulting, or even threatening to the developers, such as:
1. How well does it work?
2. What does it mean to you that "it works"?
3. How do you know it works? What evidence do you have?
4. In what ways could it seem to work but still have something wrong?
5. In what ways could it seem to not work but really be working?
6. What might cause it not to work well?
A good developer does not necessarily make a good tester and vice versa, but testers and developers do share at least one major trait—they itch to get their hands on the keyboard. As laudable as this may be, being in a hurry to start can cause important design work to be glossed over and so special, subtle situations might be missed that would otherwise be identified in planning. Like code reviews, test design reviews are a good sanity check and well worth the time and effort.
Testers are the only IT people who will use the system as heavily as an expert user on the business side. User testing almost invariably recruits too many novice business users because they're available and because the application must be usable by them. The problem is that novices don't have the business experience that the expert users have and might not recognize that something is wrong. Testers from IT must find the defects that only the expert users would find, because the experts may not report problems if they've learned that it's not worth their time or trouble.
Key Players and Their Roles
Business sponsor(s) and partners:
1. Provides funding
2. Specifies requirements and deliverables
3. Approves changes and some test results
Project manager: Plans and manages the project
Software developer(s):
1. Designs, codes, and builds the application
2. Participates in code reviews and testing
3. Fixes bugs, defects, and shortcomings
Testing Coordinator(s):
Creates test plans and test specifications based on the requirements and the functional and technical documents
Tester(s): Executes the tests and documents the results
Regards,
Nandagiri Pavankumar
Wednesday, December 26, 2007
How to write a good Defect Report
Here are the keys:
1. Be very specific when describing the bug. Don’t let there be any room for interpretation. More concise means less ambiguous, so less clarification will be needed later on.
2. Calling windows by their correct names (the name displayed in the title bar) will eliminate some ambiguity.
3. Don’t be repetitive. Don’t repeat yourself. Also, don’t say things twice or three times.
4. Try to limit the number of steps to recreate the problem. A bug that is written with 7 or more steps can usually become hard to read. It is usually possible to shorten that list.
5. Start describing with where the bug begins, not before. For example, you don't have to describe how to load and launch the application if the application crashes on exit.
6. Proofreading the bug report is very important. Send it through a spell checker before submitting it.
7. Make sure that all step numbers are sequenced. (No missing step numbers and no duplicates.)
8. Please make sure that you use sentences. This is a sentence. This not sentence.
9. Don’t use a condescending or negative tone in your bug reports. Don’t say things like "It's still broken", or "It is completely wrong".
10. Don’t use vague terms like "It doesn’t work" or "not working properly"
11. If there is an error message involved, be sure to include the exact wording of the text in the bug report. If there is a GPF (General Protection Fault) be sure to include the name of the module and address of the crash.
12. Once the text of the report is entered, you don’t know whose eyes will see it. You might think that it will go to your manager and the developer and that’s it, but it could show up in other documents that you are not aware of, such as reports to senior management or clients, to the company intranet, to future test scripts or test plans. The point is that the bug report is your work product, and you should take pride in your work.
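Putting several of these rules together, a report might look like this (a made-up example):

Title: "Save As" window crashes when the file name contains a comma.
Steps to reproduce:
1. Open any document and select File > Save As. The "Save As" window opens.
2. Type report,v2 in the File name field.
3. Click Save.
Expected result: The file is saved as "report,v2".
Actual result: The application crashes. Error text: "Unhandled exception in module SAVE32.DLL at 0137:BFF9A5D0".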
Hope this will help you to write a proper Bug Report.
Regards,
Nandagiri PavanKumar...
What Do We Test
WHAT DO WE TEST?
First, test what’s important. Focus on the core functionality—the parts that are critical or popular—before looking at the ‘nice to have’ features. Concentrate on the application’s capabilities in common usage situations before going on to unlikely situations. For example, if the application retrieves data and performance is important, test reasonable queries with a normal load on the server before going on to unlikely ones at peak usage times. It’s worth saying again: focus on what’s important. Good business requirements will tell you what’s important.
The value of software testing is that it goes far beyond testing the underlying code. It also examines the functional behavior of the application. Behavior is a function of the code, but it doesn’t always follow that if the behavior is "bad" then the code is bad. It’s entirely possible that the code is solid but the requirements were inaccurately or incompletely collected and communicated. It’s entirely possible that the application can be doing exactly what we’re telling it to do but we’re not telling it to do the right thing.
A comprehensive testing regime examines all components associated with the application. Even more, testing provides an opportunity to validate and verify things like the assumptions that went into the requirements, the appropriateness of the systems that the application is to run on, and the manuals and documentation that accompany the application. More likely though, unless your organization does true "software engineering" (think of Lockheed Martin, IBM, or SAS Institute), the focus will be on the functionality and reliability of the application itself.
Testing can involve some or all of the following factors. The more, the better.
1. Business requirements
2. Functional design requirements
3. Technical design requirements
4. Regulatory requirements
5. Programmer code
6. Systems administration standards and restrictions
7. Corporate standards
8. Professional or trade association best practices
9. Hardware configuration
10. Cultural issues and language differences
SOFTWARE TESTING
WHAT IS SOFTWARE TESTING?
Software testing is a process of verifying and validating that a software application or program:
1. Meets the business and technical requirements that guided its design and development, and
2. Works as expected.
Software testing also identifies important defects, flaws, or errors in the application code that must be fixed. The modifier "important" in the previous sentence is, well, important, because defects must be categorized by severity.
During test planning we decide what an important defect is by reviewing the requirements and design documents with an eye towards answering the question "Important to whom?" Generally speaking, an important defect is one that from the customer’s perspective affects the usability or functionality of the application. Using colors for a traffic lighting scheme in a desktop dashboard may be a no-brainer during requirements definition and easily implemented during development but in fact may not be entirely workable if during testing we discover that the primary business sponsor is color blind. Suddenly, it becomes an important defect.
The quality assurance aspect of software development—documenting the degree to which the developers followed corporate standard processes or best practices—is not addressed in this paper because assuring quality is not a responsibility of the testing team. The testing team cannot improve quality; they can only measure it, although it can be argued that doing things like designing tests before coding begins will improve quality because the coders can then use that information while thinking about their designs and during coding and debugging.
Software testing has three main purposes: verification, validation, and defect finding.
1. The verification process confirms that the software meets its technical specifications. A "specification" is a description of a function in terms of a measurable output value given a specific input value under specific preconditions. A simple specification may be along the line of "a SQL query retrieving data for a single account against the multi-month account-summary table must return these eight fields, ordered by month, within 3 seconds of submission." (A sketch of such a check appears after this list.)
2. The validation process confirms that the software meets the business requirements. A simple example of a business requirement is "After choosing a branch office name, information about the branch's customer account managers will appear in a new window. The window will present manager identification and summary information about each manager's customer base: ..." Other requirements provide details on how the data will be summarized, formatted, and displayed.
3. A defect is a variance between the expected and actual result. The defect’s ultimate source may be traced to a fault introduced in the specification, design, or development (coding) phases.
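A verification check like the query specification in point 1 can be automated. Below is a minimal sketch using Python's built-in sqlite3 module; the database, table, and account ID are hypothetical, while the eight-field and 3-second thresholds come from the specification:

import sqlite3
import time

# Hypothetical database, table, and account ID -- for illustration only.
conn = sqlite3.connect("accounts.db")

start = time.monotonic()
cursor = conn.execute(
    "SELECT * FROM account_summary WHERE account_id = ? ORDER BY month",
    ("ACC-100",),
)
rows = cursor.fetchall()
elapsed = time.monotonic() - start

# Verification compares measurable outputs against the specification.
assert len(cursor.description) == 8, "query must return eight fields"
assert elapsed < 3.0, "query must respond within 3 seconds of submission"
print("verified:", len(rows), "rows in", round(elapsed, 3), "seconds")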
Sunday, December 23, 2007
Types Of Testing:
The development process involves various types of testing. Each test type addresses a specific testing requirement. The most common types of testing involved in the development process are:
• Unit Test
• System Test
• Integration Test
• Functional Test
• Performance Test
• Beta Test
• Acceptance Test
Unit Testing:
The first test in the development process is the unit test. The source code is normally divided into modules, which in turn are divided into smaller pieces called units. These units have specific behavior, and a test done on such a unit of code is called a unit test. Unit testing depends on the language in which the project is developed. Unit tests ensure that each unique path of the project performs accurately to the documented specifications and has clearly defined inputs and expected results.
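For instance, a unit test written with Python's unittest framework exercises one small unit with defined inputs and expected results (the leap-year function here is invented for illustration):

import unittest

def is_leap_year(year):
    # The unit under test: a small piece of code with specific behavior.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

class LeapYearTests(unittest.TestCase):
    def test_typical_leap_year(self):
        self.assertTrue(is_leap_year(2008))

    def test_century_is_not_leap(self):
        self.assertFalse(is_leap_year(1900))

    def test_four_hundred_year_rule(self):
        self.assertTrue(is_leap_year(2000))

if __name__ == "__main__":
    unittest.main()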
System Test:
Several modules constitute a project. If the project is a long-term one, several developers write the modules. Once all the modules are integrated, several errors may arise. The testing done at this stage is called the system test.
System testing ensures that the entire integrated software system meets requirements. It tests a configuration to ensure known and predictable results. System testing is based on process descriptions and flows, emphasizing pre-driven process links and integration points.
Testing a specific hardware/software installation is typically performed on a COTS (commercial off-the-shelf) system or any other system composed of disparate parts, where custom configurations and/or unique installations are the norm.
Functional Test:
Functional testing can be defined as testing two or more modules together with the intent of finding defects, demonstrating that defects are not present, verifying that the modules perform their intended functions as stated in the specification, and establishing confidence that a program does what it is supposed to do.
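As a sketch of that idea, the test below drives two invented modules together and checks the combined behavior against a stated expectation:

# Module 1 (hypothetical): parse an "item,quantity" order line.
def parse_order(text):
    item, quantity = text.split(",")
    return item.strip(), int(quantity)

# Module 2 (hypothetical): price an order from a price list.
def price_order(item, quantity, price_list):
    return price_list[item] * quantity

# Functional test: the two modules together do what the spec says.
item, quantity = parse_order("widget, 3")
total = price_order(item, quantity, {"widget": 2.50})
assert total == 7.50, "expected 7.50, got %s" % total
print("order flow behaves as specified")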
Acceptance Testing:
Testing the system with the intent of confirming readiness of the product and customer acceptance.
Ad hoc Testing:
Testing without a formal test plan or outside of a test plan. With some projects this type of testing is carried out as an adjunct to formal testing. If carried out by a skilled tester, it can often find problems that are not caught in regular testing. Sometimes, if testing occurs very late in the development cycle, this will be the only kind of testing that can be performed. Sometimes ad hoc testing is referred to as exploratory testing.
Alpha Testing:
Testing after code is mostly complete or contains most of the functionality and prior to users being involved. Sometimes a select group of users are involved. More often this testing will be performed in-house or by an outside testing firm in close cooperation with the software engineering department.
Friday, December 21, 2007
Welcome All
Hi All,
Welcome to my Testing Site. I'm gonna post some useful testing stuff here. Enjoy learning.
Cheers,
Pavan