Behaviour Driven Development

Behaviour driven development (BDD) is a software development process that grew out of test driven development. The application evolves to match expected behaviour, which is derived from requirements such as user stories.
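As a minimal sketch of the idea, the following pytest-style test expresses a user story's acceptance criteria in Given/When/Then form. The login() function and the credentials are hypothetical stand-ins, included only so the example is self-contained.

```python
def login(username, password):
    # Toy stand-in implementation so the example runs on its own.
    return username == "alice" and password == "secret"

def test_registered_user_can_log_in():
    # Given a registered user
    username, password = "alice", "secret"
    # When they submit valid credentials
    result = login(username, password)
    # Then they are logged in
    assert result is True
```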

 


Test Environment

A test environment consists of the elements that support test execution: software, hardware and network configuration. The test environment configuration must mimic the production environment as closely as possible in order to uncover any environment or configuration related issues. Different types of environment include:

  • Development: where the developer writes their code.
  • Test: where functional and non-functional testing is conducted.
  • Acceptance: where the product owner or user performs acceptance testing after the build is deployed at the end of the sprint.
  • Production: the environment to which the final release is deployed.

A typical environment configuration for a web based application includes a web server (IIS/Apache), a database (MS SQL), an operating system (Windows/Linux) and a browser (IE/Firefox).

Priority

Each test condition must be prioritised; priority helps define the order in which test cases are executed. The priority of each test condition directly correlates to the priority of the test case; priority is decided by a subject matter expert, business analyst or stakeholder.

  1. Critical to functionality, e.g. the log in page.
  2. A major feature of the application, e.g. do all the links work?
  3. Of moderate importance to the application, e.g. filters.
  4. A minor feature of the website, easily worked around.
  5. A cosmetic feature of the application, e.g. spelling mistakes.
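As a minimal sketch of how priority can drive test selection, the following tags pytest tests with hypothetical priority markers; the marker names are assumptions and would need to be registered in pytest.ini to avoid unknown-marker warnings, and the assertions are placeholders.

```python
import pytest

@pytest.mark.critical          # priority 1: critical to functionality
def test_login_page_loads():
    assert True                # placeholder for a real check

@pytest.mark.minor             # priority 4: easily worked around
def test_filter_order():
    assert True                # placeholder for a real check
```

Running `pytest -m critical` would then execute only the priority 1 tests, so the highest-priority conditions are exercised first.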

Manual Testing

In software testing, manual testing is the process of manually reviewing and testing software for defects. The objectives of manual testing are: to improve software quality by finding defects and re-testing defect fixes; to reduce risk to the company by executing tests that prove the functionality works as intended; and to provide test metrics to stakeholders so they can make the decision on the software release.

Phases of delivery:

  1. Test Preparation
  2. Test Execution
  3. Test Closure

Test preparation:

Test preparation ensures that all documentation and procedures are in place so that test execution can commence as soon as the first software build is ready. Typical test preparation tasks include: clearly defining roles and responsibilities for the team during test execution, completing any test collateral, scheduling and assigning test tasks, and setting up and provisioning test environments. Deliverables include the test plan, the test conditions, the test cases, test charters and the test execution schedule.

Test execution:

The purpose of test execution is to execute all tests on the application and to keep project stakeholders informed of the important test metrics, e.g. tests executed, defects raised and defects fixed. Test execution includes re-testing and regression testing of software fixes, raising defects, defect triage meetings with business stakeholders and developers, and the evaluation of progress against the test exit criteria. Typical deliverables include defect summary reports, test progress reports and test execution results.

Test closure:

Test closure documents the final state of the application, including metrics covering test cases executed, tests passed and defects outstanding, and prepares for any testing that may be conducted after closure. Tasks include producing the test closure report, supporting the go/no-go decision and identifying pre- and post-release testing needs.

Test Planning

A test plan is created to outline how testing activities will be co-ordinated and delivered. The format and contents of the test plan should be based on the needs of the project: a formal, fully detailed approach may be required for complex code changes, while a simple BAU (Business As Usual) change may only require a short presentation outlining the key elements of the delivery. A walkthrough of the test plan with all project stakeholders should always take place to ensure that everything has been covered and fully understood. IEEE 829 defines the headers for the test plan.

Test Activities

User acceptance testing is conducted by the actual users of the system to confirm that the software can handle required tasks in real world scenarios. Performance testing is the process of testing an application, often including the hardware and back end architecture, to assess whether the system still functions correctly under real world usage.

Performance testing:

Load testing: can the system handle a real world load? A typical requirement of a system or application is that it must support a set number of users. Load testing places a real world load on the system to ensure performance is adequate; by recording response times it is possible to determine whether the requirements have been met.
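A minimal load-test sketch follows, using Python's standard concurrent.futures with the third-party requests library. The endpoint URL and the 50-user figure are assumptions for illustration, not details from a real system; a production load test would normally use a dedicated tool.

```python
import time
from concurrent.futures import ThreadPoolExecutor

import requests  # third-party; pip install requests

URL = "https://example.com/login"   # hypothetical endpoint

def timed_request(_):
    # Issue one request and return how long it took.
    start = time.perf_counter()
    requests.get(URL, timeout=30)
    return time.perf_counter() - start

# Simulate 50 concurrent users (an assumed requirement) and record
# response times so they can be checked against the agreed target.
with ThreadPoolExecutor(max_workers=50) as pool:
    response_times = list(pool.map(timed_request, range(50)))

print(f"worst: {max(response_times):.2f}s, "
      f"mean: {sum(response_times) / len(response_times):.2f}s")
```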

Stress testing: what is the system’s breaking point? The objective of stress testing is to increase load over time in order to find the system’s breaking point, and so discover the level of load at which performance begins to degrade. Once this is defined, organisations can understand the potential impact of the breaking point and whether further optimisation is needed.
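A stress-test sketch in the same vein: the simulated user count is doubled each step until the mean response time crosses an assumed 2-second threshold, approximating the breaking point. The URL, threshold and cap are all illustrative assumptions.

```python
import time
from concurrent.futures import ThreadPoolExecutor

import requests  # third-party; pip install requests

URL = "https://example.com/login"   # hypothetical endpoint

def timed_request(_):
    start = time.perf_counter()
    requests.get(URL, timeout=30)
    return time.perf_counter() - start

# Ramp the load step by step; the cap stops the sketch running forever
# against a very fast system.
users = 10
while users <= 1280:
    with ThreadPoolExecutor(max_workers=users) as pool:
        times = list(pool.map(timed_request, range(users)))
    mean = sum(times) / len(times)
    print(f"{users} users -> mean {mean:.2f}s")
    if mean > 2.0:   # assumed acceptable-latency threshold
        print(f"performance degrades at roughly {users} concurrent users")
        break
    users *= 2
```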

Volume testing: does performance degrade over time? Volume testing ensures that the system can handle a full day of use without fault, serving as a system stability test. Often this type of test can only be carried out a limited number of times per incremental release; it therefore serves as a final check of system reliability.

Operational acceptance testing assesses whether the system that has been developed can be supported operationally; this may include data and system recovery, disaster recovery, maintainability, portability and supportability.

Test estimation

Test estimation is important to the SDLC and the testing lifecycle: it sets an expectation of how many tests are required for a given piece of work and how long those tests will take to create and execute. With this, the test team can give stakeholders an idea of whether testing is feasible within the budget.
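A toy estimation sketch follows; the case counts, per-case effort and number of execution cycles are entirely hypothetical figures, which in practice would come from the team's own historical data.

```python
# Hypothetical test-case counts per feature area.
test_cases = {"login": 12, "search": 20, "checkout": 18}

hours_to_write = 0.5       # assumed average effort to create one case
hours_to_execute = 0.25    # assumed average time to execute one case
execution_cycles = 3       # initial run plus two regression cycles

total_cases = sum(test_cases.values())
effort = total_cases * (hours_to_write + hours_to_execute * execution_cycles)
print(f"{total_cases} cases, estimated effort: {effort:.1f} hours")
```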


Test driven development

Test driven development is a process in which the code is improved based on the results of running test scripts; these scripts are derived from the requirements.
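A minimal red-green sketch of the cycle: the test is written first and fails, then the implementation is written to make it pass. The add() function is a toy stand-in for requirement-derived behaviour.

```python
def add(a, b):
    # Step 2: the implementation, written only to make the test pass.
    return a + b

def test_add_sums_two_numbers():
    # Step 1: the test, written first from the requirement.
    assert add(2, 3) == 5
```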

 

Test Condition: Test that a user can log in to the application with valid details
Test Case ID: APP-LOG-001
Priority: 1
Requirement Reference: APP-LOGIN_1.1

Test conditions

When creating test conditions, ensure that the following questions are considered:

  • Are the requirements objective?
  • Are the requirements clear and unambiguous?
  • Are any business terms used within the requirement specification defined and explained?

 

Identify business requirements:

Pick out functionality from the requirement documents and ensure each requirement is uniquely identifiable, clear, precise, objective and unambiguous. A test condition describes a characteristic of an individual requirement that can then be verified by a test case. A test condition is also known as a test title; there must be at least one test condition associated with each requirement (via its requirement ID).

Rules of thumb:

  • Use absolute language, no “ifs”.
  • Test conditions should always start with “test that”.
  • Be to the point and concise.
  • Every positive test should have a negative counterpart (see the sketch after this list).
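A minimal sketch of a positive test and its negative counterpart, reusing a toy login() stand-in; the function and the credentials are hypothetical, and the test names follow the “test that” rule above.

```python
def login(username, password):
    # Toy stand-in so the pair of tests is self-contained.
    return username == "alice" and password == "secret"

def test_that_a_user_can_log_in_with_valid_details():       # positive
    assert login("alice", "secret") is True

def test_that_a_user_cannot_log_in_with_invalid_details():  # negative
    assert login("alice", "wrong") is False
```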

Test Tools

Tools

Manual testing is the basis for all automation testing; there are six categories of automation tools that assist in the test process.

Test management tools

Test management tools are tools tailored to organising the documentation related to testing and keeping track of the test process. Requirement management tools compare the tests to the requirements for accountability and identify missing or incomplete requirements. Incident management tools cover contingencies and what to do when an incident actually occurs. Configuration management tools keep track of different versions of the software and tests; they are integral when working with more than one test environment.

Static Testing tools

Static testing tools provide ways of keeping track of the review process and examining the code for defects. Popular categories of these tools include: review tools, which provide support throughout the review process; static analysis tools, which find defects before test execution by examining structures and dependencies; and modelling tools, which find defects in data models, state models and object models.

Test specification tools

Test specification tools are used in the creation of test conditions and test cases. They include: test design tools, which generate test cases from requirements, graphical user interfaces, data models and code; and test data preparation tools, which manipulate data (databases, files and data transfers) in preparation for testing.

Test execution and logging

Test execution and logging tools centre on executing the test cases and tracking the testing carried out; there are several types of test execution and logging tools.

  • Test execution tools: these can pertain to automated testing and test recording, may be data or keyword driven, and log the results of the tests.
  • Test harnesses/unit test frameworks: these simulate the environment in which test objects are run.
  • Test comparators: these identify the differences between the expected and actual outcomes during testing (see the sketch after this list).
  • Coverage measurement tools: these measure the percentage of the code structure covered in white box testing.
  • Security tools: these handle the protection of data and the prevention of viruses.
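A minimal comparator sketch using Python's standard-library difflib: the expected and actual output captured during a test are compared and the divergence is printed. The output lines themselves are illustrative.

```python
import difflib

expected = ["status: OK", "items: 3", "total: 30.00"]
actual   = ["status: OK", "items: 3", "total: 29.99"]

# Print a unified diff of expected versus actual results.
for line in difflib.unified_diff(expected, actual,
                                 fromfile="expected", tofile="actual",
                                 lineterm=""):
    print(line)
```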

Performance and monitoring tools

These tools operate in a live environment once the application has been launched. Dynamic analysis tools find defects while a program is run and are useful for detecting memory leaks. Performance, load and stress testing tools monitor and report on how the system behaves under various simulated conditions. Monitoring tools continuously analyse, verify and report on the usage of system resources.
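A minimal dynamic-analysis sketch using Python's standard-library tracemalloc module: a deliberately leaky function is exercised and the growth in traced allocations is reported, the typical signature of a memory leak.

```python
import tracemalloc

leaky_cache = []

def handle_request():
    # Deliberate leak: entries are appended and never cleared.
    leaky_cache.append("x" * 10_000)

tracemalloc.start()
for _ in range(1_000):
    handle_request()
current, peak = tracemalloc.get_traced_memory()
print(f"current: {current / 1e6:.1f} MB, peak: {peak / 1e6:.1f} MB")
tracemalloc.stop()
```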

Specific testing needs

These tools can relate to data quality assessment, comparing files and databases to a format that has been specified in advance, as in the sketch below.
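A minimal data-quality sketch: rows in a CSV file (inlined here so the example is self-contained) are checked against a format specified in advance. The column names and validation rules are hypothetical.

```python
import csv
import io

EXPECTED_COLUMNS = ["id", "email", "created"]

# Inline sample standing in for a real exported file.
sample = io.StringIO("id,email,created\n1,a@example.com,2017-06-15\n")

reader = csv.DictReader(sample)
assert reader.fieldnames == EXPECTED_COLUMNS, "unexpected file format"
for row in reader:
    assert row["id"].isdigit(), f"non-numeric id: {row['id']}"
```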


Selecting a tool for an organisation

  1. Analyse the problem: identify the strengths, weaknesses and opportunities that exist within the organisation.
  2. Generate alternative solutions: ask questions such as, would outsourcing be cheaper?
  3. List any constraints and requirements the project has: know what type of product you need; is it the right product you are looking for?
  4. Evaluate and shortlist potential tools: cut the candidate software down to the most beneficial.
  5. Produce a proof of concept for the chosen tool: why is this tool the best? Produce a feasibility report.
  6. Negotiate a suitable price with the vendor: discuss annual fees and any additional costs.
  7. Conduct a pilot project: release the product in a sample environment to check its overall suitability for the business.

Incident Report (IEEE 829)

  1. Test incident report identifier: the ID associated with the documentation.
  2. Summary: a summary of any expected versus actual divergence.
  3. Incident description: a detailed description of the incident; this can include:
    1. Inputs
    2. Anomalies
    3. Environment
    4. Expected results
    5. Actual results
    6. Date and time
    7. Procedure steps
    8. Comments
  4. Impact: what impact the incident had on overall progress (a sketch of these fields as a record type follows below).
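A minimal sketch of the IEEE 829 incident-report fields as a Python record type; the field names follow the headings above, and the example values are illustrative.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class IncidentReport:
    identifier: str            # test incident report identifier
    summary: str               # expected versus actual divergence
    expected_result: str
    actual_result: str
    environment: str
    raised_at: datetime        # date and time
    procedure_steps: list = field(default_factory=list)
    comments: str = ""
    impact: str = ""

report = IncidentReport(
    identifier="INC-001",
    summary="Login accepts blank password",
    expected_result="Login rejected",
    actual_result="User logged in",
    environment="Test (Windows, Firefox)",
    raised_at=datetime(2017, 6, 15, 14, 45),
)
```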

Risk

Risk is the chance of an event, hazard, threat or situation occurring and resulting in undesirable consequences or potential problems.

Level of risk = probability x impact

  • Project risk relates to the project’s ability to deliver, e.g. people, resources and schedule.
  • Product risk relates to the quality of the product itself.
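A minimal sketch of the level-of-risk formula above: probability and impact are scored on an assumed 1–5 scale and multiplied, so that risks can be ranked. The risk names and scores are illustrative.

```python
# (probability, impact) pairs on an assumed 1-5 scale.
risks = {
    "payment gateway outage": (2, 5),
    "minor layout defect":    (4, 1),
}

# Rank risks by level of risk = probability x impact, highest first.
for name, (probability, impact) in sorted(
        risks.items(), key=lambda r: r[1][0] * r[1][1], reverse=True):
    print(f"{name}: level of risk = {probability * impact}")
```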

Incident Management

Incident management comes under service management and is the logging of discrepancies between actual and expected outcomes of a test. Incidents need to be tracked from discovery and classification through to the correction and confirmation of the solution. They provide the developer with clear feedback, give the test lead a way to track the quality of the system under test, and provide ideas for test process improvement.

Configuration Management

Configuration management is concerned with keeping track of the test process through version controlled documentation and test processes. Attributes of configuration management include:

  • Uniquely identified documentation.
  • Documentation that is version controlled.
  • Tracking of any alterations made.
  • Referenced unambiguously in all documentation.
  • Related to each other and to development items so that traceability and accountability can be maintained throughout the whole development process.

Test Control

Test control involves actions such as assigning more resources to a project or reprioritising tests when an identified project risk occurs. The test schedule is updated whenever the test environment changes. Test control also sets entry criteria, for example requiring fixes to be retested by the development team before a build is accepted.