Agile

Agile development is an iterative process that focuses on the development team working in a flexible environment; in this model the entire project team works in cycles towards demonstrable quality. There are four main approaches when working in agile:

RAD, or rapid application development, is a way of developing software with little planning in advance; the software is adapted as needed, making it easier to make changes.

Lean is a “trim the fat” way of working, eliminating redundancy while leaving the value. Its main aims are to eliminate unnecessary expenditure and to create value for the client.

XP, or extreme programming, works with the mindset of implementing the simplest coding solution for each of the requirements laid out.

Scrum is an iterative methodology that uses small teams to complete aspects of the software within a cycle. A scrum team consists of a scrum master, a product owner and a development team, which in turn consists of a developer, a tester and a business analyst.


Scrum Master:

The scrum master gives velocity to a project; velocity is calculated as task length divided by the number of sprints needed plus one. The scrum master is chiefly responsible for ensuring that the team receives no outside interference while working on the project and for hosting the daily scrum meetings to assess the progress made.

 

Product Owner:

The product owner is the party responsible for defining what the product will be; they own the product backlog and dictate the overall project the team will be working on.

Developer:

The developer is responsible for creating the software.

Tester:

The tester is responsible for testing the software.

Business Analyst:

The business analyst is responsible for ensuring the requirements are met.

Test Cases

Each test case should consist of sections covering: the test case ID, priority, test condition, test data, test steps, expected results, actual results and defect ID (a sketch of this structure follows the agile values below). The four main values of agile are:

  • Individuals and interactions over process and tools
  • Working deliverables over comprehensive documentation
  • Customer collaboration over contract negotiation
  • Responding to change over following a plan
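
Returning to the test case structure above, here is a minimal sketch of a test case record holding those sections (the field values, such as the login steps and credentials, are illustrative assumptions, not from a real project):

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class TestCase:
    """One test case, with the sections described above."""
    test_case_id: str                      # e.g. "APP-LOG-001"
    priority: int                          # 1 = highest
    test_condition: str                    # always phrased "Test that ..."
    test_data: str
    test_steps: List[str]
    expected_results: str
    actual_results: Optional[str] = None   # filled in during execution
    defect_id: Optional[str] = None        # raised only if the test fails

login_case = TestCase(
    test_case_id="APP-LOG-001",
    priority=1,
    test_condition="Test that a user can log in to the application with valid details",
    test_data="username=demo, password=correct-password",
    test_steps=["Open the login page", "Enter valid credentials", "Click 'Log in'"],
    expected_results="The user is taken to their account home page",
)
```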

Manual Testing

In software testing, manual testing is the process of manually reviewing and testing software for defects. The objectives of manual testing are: improving software quality by finding defects and re-testing defect fixes; reducing risk to the company by executing tests that prove the functionality works as intended; and providing test metrics to stakeholders so they can make the decision on the software release.

Phases of delivery:

  1. Test Preparation
  2. Test Execution
  3. Test Closure

Test preparation:

Test preparation ensures that all documentation and procedures are in place so that test execution can commence as soon as the first software build is ready. Typical test preparation tasks include: clearly defining roles and responsibilities for the team during test execution, completing any test collateral, scheduling and assigning tasks to testers and setting up provisions for test environments. Deliverables include: the test plan, the test conditions, the test cases, test charters and the test execution schedule.

Test execution:

The purpose of test execution is to execute all tests on the application and ensure that the project stakeholders are kept informed of all the important test metrics, e.g. tests executed, defects raised and defects fixed. Test execution includes re-testing and regression testing of software fixes, raising defects, defect triage meetings with business stakeholders and developers, and the evaluation of progress against the test exit criteria. Typical deliverables include defect summary reports, test progress reports and test execution results.

Test closure:

Test closure documents the final state of the application, including metrics covering test cases executed, tests passed and defects outstanding, and prepares for any testing that may be conducted after test closure. Tasks include the test closure report, the go/no-go decision and the assessment of pre- and post-release testing needs.

Test Planning

A test plan is created to outline how testing activities will be co-ordinated and delivered. The format and contents of the test plan should be based on the needs of the project. A formalised, fully detailed approach may be required for complex code changes, while a simple BAU (Business As Usual) change may only require a short presentation to outline the key elements of the delivery. A walkthrough of the test plan with all project stakeholders should always take place to ensure that everything has been covered and is fully understood. IEEE 829 defines the headings for the test plan.

Test Activities

User acceptance testing is conducted by the actual users of the system to confirm that the software can handle the tasks required of it in real-world scenarios. Performance testing is the process of testing an application, often including the hardware and back-end architecture, to assess whether the system still functions correctly under real-world usage.

Performance testing:

Load testing: can the system handle a real-world load? A typical requirement of a system or application is that it must be usable by a set number of users. Load testing places a real-world load on a system to ensure performance is adequate. By recording response times it is possible to determine whether requirements have been met.
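
As a minimal sketch of that idea (the URL, user count and 2-second threshold are illustrative assumptions), the script below fires concurrent requests and records response times:

```python
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

URL = "http://localhost:8080/login"  # hypothetical system under test
USERS = 50                           # simulated concurrent users
MAX_RESPONSE_SECONDS = 2.0           # assumed requirement

def timed_request(_):
    start = time.perf_counter()
    urlopen(URL).read()
    return time.perf_counter() - start

# Place USERS simultaneous requests and collect each response time.
with ThreadPoolExecutor(max_workers=USERS) as pool:
    times = list(pool.map(timed_request, range(USERS)))

print(f"slowest: {max(times):.2f}s, average: {sum(times) / len(times):.2f}s")
print("requirement met" if max(times) <= MAX_RESPONSE_SECONDS else "requirement NOT met")
```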

Stress testing: what is the system’s breaking point? The objective of stress testing is to increase the load over time so that the system’s breaking point can be found. Stress testing can be used to discover the level of load at which performance begins to degrade. Once this is defined, organisations can understand the potential impact of the breaking point and whether further optimisations need to be made.

Volume testing: does performance degrade over time? Volume testing can ensure that the system is able to handle a full day of use without fault. This type of testing serves as a system stability test. Often this type of test can only be carried out a limited number of times with each incremental release; it therefore serves as a final check for system reliability.

Operational acceptance testing assesses whether the system that has been developed is capable of being supported operationally; this may include things such as: data and system recovery, disaster recovery, maintainability, portability and supportability.

Test estimation

Test estimation is important to the SDLC and the testing lifecycle: it provides an expectation of how many tests are required for a given piece of work and how long these tests will take to create and execute. In providing this, the test team can give the stakeholders an idea of whether testing is feasible within the budget.
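
As an entirely hypothetical worked example: if a piece of work is estimated at 40 test cases, each taking around 30 minutes to create and 15 minutes to execute, the team can forecast roughly 20 hours of preparation and 10 hours of execution per test run, before re-testing and regression are factored in, and stakeholders can weigh that effort against the budget.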


Test driven development

Test-driven development is a process in which test scripts are derived from the requirements and written first; the code is then written and improved based on the results of running those scripts.
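
A minimal sketch of the cycle, using Python’s built-in unittest and a hypothetical validate_login function: the tests are written first from the requirement, fail against a missing implementation, and the code is then written to make them pass.

```python
import unittest

# Step 2: written only after the tests below existed and failed.
def validate_login(username, password):
    # Hypothetical stand-in for real credential checking.
    return username == "demo" and password == "correct-password"

# Step 1: derived straight from the requirement
# "a user can log in to the application with valid details".
class TestLogin(unittest.TestCase):
    def test_valid_details_are_accepted(self):
        self.assertTrue(validate_login("demo", "correct-password"))

    def test_invalid_details_are_rejected(self):
        self.assertFalse(validate_login("demo", "wrong-password"))

if __name__ == "__main__":
    unittest.main()
```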

 

| Test Condition | Test Case ID | Priority | Requirement Reference |
| --- | --- | --- | --- |
| Test that a user can log in to the application with valid details | APP-LOG-001 | 1 | APP-LOGIN_1.1 |

Test conditions

When creating test conditions, ensure that the following questions are considered:

  • Are the requirements objective?
  • Are the requirements clear and unambiguous?
  • Are any business terms used within the requirement specification defined and explained?

 

Identify business requirements:

Pick out functionality from the documents to ensure each requirement is uniquely identifiable, clear, precise, objective and unambiguous. A test condition describes the characteristics of an individual requirement that can then be verified by a test case. A test condition is also known as a test title; there must be at least one test condition associated with each requirement (via the requirement ID).
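
That one-condition-per-requirement rule lends itself to a simple traceability check; the sketch below (requirement and condition IDs are made up for illustration) flags any requirement with no associated test condition:

```python
# Hypothetical requirement IDs and the conditions traced to them.
requirements = {"APP-LOGIN_1.1", "APP-LOGIN_1.2", "APP-SEARCH_2.1"}
conditions = {
    "APP-LOG-001": "APP-LOGIN_1.1",   # condition ID -> requirement ID
    "APP-LOG-002": "APP-LOGIN_1.1",
    "APP-SEA-001": "APP-SEARCH_2.1",
}

# Every requirement must appear as the target of at least one condition.
covered = set(conditions.values())
for requirement in sorted(requirements - covered):
    print(f"no test condition traced to requirement {requirement}")
```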

Rules of thumb:

  • Use absolute language, no “ifs”
  • Test conditions should always start with “test that”
  • Be to the point and concise.
  • Every positive test should have a negative counterpart (see the example below).
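
For example, building on the login condition above: “Test that a user can log in to the application with valid details” should be paired with the negative counterpart “Test that a user cannot log in to the application with invalid details”.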

Test Tools


Manual testing is the basis for all automation testing. There are six categories of automation tools that assist in the test creation process.

Test management tools

Test management tools are tailored to organising the documentation related to testing and keeping track of the test process. Requirement management tools compare the tests to the requirements for accountability and identify missing or incomplete requirements. Incident management tools cover contingencies and what to do when an incident actually occurs. Configuration management tools keep track of different software versions and tests; they are integral when working with more than one test environment.

Static Testing tools

Static testing tools provide ways of keeping track of the review process and examining the code for defects. Popular categories of these tools include: review tools, which provide support throughout the review process; static analysis tools, which find defects before test execution by looking at structures and dependencies; and modelling tools, which find defects in data models, state models and object models.

Test specification tools

Test specification tools are used in the creation of test conditions and test cases. They include: test design tools, which generate test cases from requirements, graphical user interfaces, data models and code; and test data preparation tools, which manipulate data (databases, files and data transfers) in preparation for testing.

Test execution and logging

Test execution and logging tools centre on executing the test cases and tracking the testing carried out; there are several types of test execution and logging tools.

  • Test execution tools: these can pertain to automated testing and test recording; they may be data-driven or keyword-driven and they log the results of the tests.
  • Test harness/unit test framework: these simulate the environment in which test objects are run (see the sketch after this list).
  • Test comparators: these identify the differences between the expected and actual outcomes during testing.
  • Coverage measurement tools: these measure the percentage of the code structure covered in white-box testing.
  • Security tools: these handle the protection of data and the prevention of viruses.
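
As a minimal sketch of the harness and comparator ideas combined (the add function and its cases are made up for illustration), the harness runs each test object and the comparison against the expected outcome decides pass or fail:

```python
def add(a, b):
    # Hypothetical unit under test.
    return a + b

# (input arguments, expected outcome) pairs.
cases = [((1, 2), 3), ((-1, 1), 0), ((0, 0), 0)]

# A tiny harness: run each case, compare actual to expected, log the result.
for args, expected in cases:
    actual = add(*args)
    verdict = "PASS" if actual == expected else "FAIL"
    print(f"add{args} -> {actual} (expected {expected}): {verdict}")
```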

Performance and monitoring tools

These tools operate in a live environment once the application has been launched. Dynamic analysis tools find defects while a program is running and are useful for detecting memory leaks. Performance, load and stress testing tools monitor and report on how the system behaves in various simulated environments. Monitoring tools continuously analyse, verify and report on the usage of system resources.
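
In Python, the standard library’s tracemalloc module gives a small taste of what a dynamic analysis tool does; the ever-growing list below stands in for a hypothetical leak:

```python
import tracemalloc

tracemalloc.start()

leak = []
for _ in range(100_000):
    leak.append("x" * 100)   # stands in for memory that is never released

# Report the source lines responsible for the most allocated memory.
snapshot = tracemalloc.take_snapshot()
for stat in snapshot.statistics("lineno")[:3]:
    print(stat)
```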

Specific testing needs

These tools can relate to data quality assessment: comparing files and databases to a format that has been specified in advance.

  • Test management tools
  • Static testing tools
  • Test specification tools
  • Execution and logging tools
  • Performance and monitoring tools
  • Specific testing needs tools

Selecting a tool for an organisation

  1. Analyse the problem: identify the strengths, weaknesses and opportunities that exist within the organisation.
  2. Generate alternative solutions: ask questions such as “would outsourcing be cheaper?”
  3. List any constraints and requirements the project has: know what type of product you need; is it the right product you are looking for?
  4. Evaluate and shortlist potential tools: cut the candidate software down to the most beneficial.
  5. Build a proof of concept for the chosen tool: why is this tool the best? Produce a feasibility report.
  6. Negotiate with the vendor for a suitable price: discuss annual fees and any additional costs with the vendor.
  7. Conduct a pilot project: release the product in a sample environment to check its overall suitability for the business.

Configuration Management

Configuration management is concerned with keeping track of the test process through version-controlled documentation and test processes. Attributes of configuration management include:

  • Uniquely identified documentation.
  • Documentation that is version controlled.
  • Tracking of any alterations made.
  • Test items that are referenced unambiguously in all documentation.
  • Test items that are related to each other and to development items so that traceability and accountability can be maintained through the whole development process.
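
For example (IDs invented for illustration), a test case document might be held at version 1.2 under the unique ID TC-APP-LOG-001, with its change history recorded and traceable links back to requirement APP-LOGIN_1.1 and to the software build it was executed against.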