I'm back after a stretch of very hard work on the new Kaspersky Anti-Virus release. We successfully passed the technical release. Our automated testing has proven a valuable weapon for testing daily builds on many test stands simultaneously. Automation is truly indispensable on this project, since it provides significant coverage and confidence in the quality of each build.
The official beta release of Kaspersky Internet Security/Anti-Virus 2011 is available on the company's forum here:
The first thing I'd like to share with you is my thoughts on testability. The term applies to any software artifact: code, documentation, architecture, design, interfaces, structure, the product as a whole, and so on. In fact, the testability metric can be applied at any level of software, down to an individual piece of code under unit test. Say a class keeps its behavior entirely behind protected members: you cannot reach its interfaces directly, so testability suffers and you have to invent workarounds such as fake objects that are granted access to that class.
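As a minimal sketch of that workaround (the class and method names here are illustrative, not from any real product): a fake object subclasses the production class purely to expose its hidden logic to a unit test.

```python
class LicenseChecker:
    """Production class: core calculation hidden behind a non-public method."""

    def _days_left(self, expiry_day, today):
        # Non-public: callers are expected to use is_valid() only.
        return expiry_day - today

    def is_valid(self, expiry_day, today):
        return self._days_left(expiry_day, today) > 0


class FakeLicenseChecker(LicenseChecker):
    """Test double: re-exposes the hidden calculation so a test can verify it."""

    def days_left(self, expiry_day, today):
        return self._days_left(expiry_day, today)


checker = FakeLicenseChecker()
print(checker.days_left(30, 10))   # 20
print(checker.is_valid(30, 30))    # False
```

The point is that the test double exists only because the production class was not designed for testability; with a public, queryable interface, the fake would be unnecessary.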
For routine functional automated testing, product testability plays a key role in deciding whether the software is suitable for automation at all. I suggest formally drawing up and registering product testability requirements in the project book and the testing strategy documents. Believe it or not, this is your ground and shield throughout the project, especially in strongly formalized ones. If you provide external test automation services, these requirements are a must as early in the project as possible. In essence, testability requirements are the basis for constructive proposals, negotiations, and argumentation as the project progresses.
What kind of information should be listed as requirements? I can't claim to cover everything, but some general ones that fit most projects are:
- Changes to the software's external GUI interfaces must go through the appropriate change management procedures
- Changes to the software's external non-GUI interfaces must go through the appropriate change management procedures
- Changes to the software's internal interfaces under test must go through the appropriate change management procedures
- All changes must be planned, discussed, and announced when they take place
- All interfaces against which testing is automated must be frozen at code freeze
- Changes to software interfaces must not reduce the capabilities of the approved testing tools or significantly affect the test automation codebase
- All GUI objects must be recognizable by the approved testing tool
- All GUI objects must be uniquely identifiable by the approved testing tool
- All standard GUI objects' methods and properties must be accessible through the approved testing tool
- All intermediate and low-level software interfaces used for automated testing must be stable
- For all planned changes, the appropriate people must be notified before the change date
- Data structures processed by test automation must provide access to each granular element through conventional methods: DB querying, object deserialization, XPath, the Registry, CSV or INI parsing, and so on
- Access to GUI objects must be unified for identical object types; all custom objects should expose a minimal, consistent set of access methods
- Custom controls not supported by the approved test automation tools must either be enriched with typical access methods, or a special tool capable of recognizing those objects must be provided
- Preferably, typical interfaces should be identical across related projects, i.e. development should aim for a cross-project interface design
- Preferably, changes should be shown to the automation testers before each new deployment, so the testing code can be adapted ahead of deployment or release
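To illustrate the "granular access through conventional methods" requirement, here is a small Python sketch (the section names, keys, and XML layout are invented for the example): an INI-style settings block and an XML report where every individual value is reachable by a standard query, with no screen-scraping.

```python
import configparser
import xml.etree.ElementTree as ET

# INI-style settings: every value addressable by section and key.
ini_text = """
[update]
server = updates.example.com
interval_minutes = 60
"""
cfg = configparser.ConfigParser()
cfg.read_string(ini_text)
print(cfg.get("update", "server"))               # updates.example.com
print(cfg.getint("update", "interval_minutes"))  # 60

# XML report: each element addressable with a conventional XPath-style query.
xml_text = "<report><scan id='1' threats='0'/><scan id='2' threats='3'/></report>"
root = ET.fromstring(xml_text)
print(root.find("./scan[@id='2']").get("threats"))  # 3
```

When the product emits its data in formats like these, a test script can assert on any single field directly; opaque binary blobs or free-form log text would force the automation code into fragile parsing instead.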
PS: Subscribe to the development codebase changelist to stay prepared for changes on an unstable project.