There are various types of automation test frameworks designed for different types of testing. Broadly speaking, they cover the following areas:
Unit Testing

This type of testing assures the development and test teams that the behaviour contained within a unit of code (e.g. a class) has been correctly verified and validated against the design specification's acceptance criteria. It is primarily a developer-led exercise, using techniques such as T.D.D. to deliver greater than 80% test coverage of behaviour at the class level. Unit tests form the bedrock of automated verification and regression testing, especially in an agile delivery model, as they are typically fast to run and fix, easy to maintain and do not depend on any external resources, so they are not susceptible to external influences or delays.
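As a minimal sketch of what a test at this level looks like, assuming a hypothetical `DiscountCalculator` class and Python's built-in `unittest` framework, each test verifies one behaviour of the class in isolation:

```python
import unittest

# Hypothetical class under test: a simple discount calculator.
class DiscountCalculator:
    def apply(self, price: float, percent: float) -> float:
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return round(price * (1 - percent / 100), 2)

class DiscountCalculatorTest(unittest.TestCase):
    def test_applies_percentage_discount(self):
        self.assertEqual(DiscountCalculator().apply(200.0, 25), 150.0)

    def test_zero_discount_returns_original_price(self):
        self.assertEqual(DiscountCalculator().apply(99.99, 0), 99.99)

    def test_rejects_out_of_range_percentage(self):
        with self.assertRaises(ValueError):
            DiscountCalculator().apply(100.0, 150)
```

Tests like these can be run with `python -m unittest` and complete in milliseconds, which is what makes them cheap to run on every commit.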
S.D.E.T.s and/or Automation Testers are commonly involved in reviewing and approving the test content and coverage; however, they should never write tests at this level.
In modern development life cycles, this form of testing is always the first to be kicked off, as it assures all team members that the most basic, common and important system functionality behaves as it should in isolation. At this level, developers commonly use mocks and stubs to simulate interaction across the external boundaries between their class and others that are yet to be built and/or are unavailable to them.
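To illustrate boundary simulation, here is a hedged sketch using Python's `unittest.mock` — the `OrderService` class and its payment gateway collaborator are hypothetical. The gateway is replaced with a `Mock`, so the unit can be tested before the real collaborator exists:

```python
from unittest.mock import Mock

# Hypothetical class under test; its collaborator (a payment gateway
# client) does not exist yet, so it is injected and mocked.
class OrderService:
    def __init__(self, gateway):
        self.gateway = gateway  # external boundary, injected so it can be mocked

    def place_order(self, amount: float) -> str:
        # Delegate the charge to the (mocked) gateway and report the outcome.
        return "CONFIRMED" if self.gateway.charge(amount) else "DECLINED"

# Simulate the unavailable gateway with a mock, so the test runs in isolation.
gateway = Mock()
gateway.charge.return_value = True

service = OrderService(gateway)
print(service.place_order(49.99))          # prints "CONFIRMED"
gateway.charge.assert_called_once_with(49.99)
```

The mock both stubs the return value and records the call, so the test verifies what the unit sent across the boundary without any real external dependency.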
The remaining automated tests cover the other 20% or less of functionality, verifying behaviour derived from integrated system components, system user functions and user-driven end-to-end tests.
Integration Testing

The main purpose of integration testing is to ensure that new and/or changed behaviour, in the form of units of code (classes, interfaces, etc.), behaves as expected when integrated with the development team's code base, which represents the team's deliverables for any given software release interval.
These tests are more expensive to run and to fix in the event of a failure; however, they only cover around 15% of the remaining functionality in terms of behavioural verification and validation. This brings the optimal total test coverage up to 95% with automated unit and integration testing alone. That might sound like a reason to stop right there, but the remaining behaviour still needs to be verified from the user's perspective, which is where functional testing comes in.
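In contrast to unit tests, which mock their boundaries, an integration test exercises real components together. A minimal sketch, assuming a hypothetical `UserRepository` backed by a real in-memory SQLite database so the persistence boundary is genuinely exercised:

```python
import sqlite3
import unittest

# Hypothetical repository under test, integrated with a real database
# engine rather than a mock.
class UserRepository:
    def __init__(self, conn):
        self.conn = conn
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)")

    def add(self, name: str) -> int:
        cur = self.conn.execute("INSERT INTO users (name) VALUES (?)", (name,))
        return cur.lastrowid

    def find(self, user_id: int):
        row = self.conn.execute(
            "SELECT name FROM users WHERE id = ?", (user_id,)).fetchone()
        return row[0] if row else None

class UserRepositoryIntegrationTest(unittest.TestCase):
    def setUp(self):
        # A real SQLite connection: slower than a mock, but it verifies
        # that the SQL and the schema actually work together.
        self.conn = sqlite3.connect(":memory:")
        self.repo = UserRepository(self.conn)

    def tearDown(self):
        self.conn.close()

    def test_round_trip_through_the_database(self):
        user_id = self.repo.add("Ada")
        self.assertEqual(self.repo.find(user_id), "Ada")
```

The extra cost comes from standing up the real dependency in `setUp`; the payoff is that a failure here points at a genuine wiring or schema problem rather than a mis-specified mock.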
Functional API Testing
Functional U.I. Testing