Jun 27, 2011
 

So to summarize what is important for an SQA engineering team to have in mind while testing (read the first two parts here and here if you have not already):

  • Installation of the feature/product: No conflicts should be seen on a system level. Environment variables of the feature should not conflict with other features.
  • Data compatibility between features: No integer variable should be sent by the feature to a process that expects a string of characters.
  • Feature compatibility: If a feature is expected to run after another feature has started, this check should be made in the beginning.
  • Error generation: If a feature dies and no one can hear it screaming, the whole system depending on this feature can still die. Thus a non-critical bug in the feature can lead to a system crash, which is quite fatal.
    • Strength of the code for this feature. If it breaks easily, and its existence is crucial, the whole system can break. Remember: A chain is as strong as its weakest link!
    • Performance of the feature. Find the possible bottlenecks before the customer finds them. It is better to have a well documented and tested product limitation in the specifications instead of an angry customer or a furious manager.
    • Recovery. If the feature breaks, can it restart and recover so it keeps working as well as possible with the scavenged data?
    • Backward compatibility: Can the system work with another system running an older version of this feature?
    • Sensitivity. What happens if the feature receives unexpected data from another feature and there is no failsafe mechanism?
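As a toy illustration of the data-compatibility point above – checking what one feature hands to another before it crosses the boundary – here is a minimal Python sketch. The names (`EXPECTED_TYPES`, `send_to_feature`) are invented for illustration, not from any real product:

```python
# Sketch of a data-compatibility guard between two features: before
# feature A hands a value to feature B, validate it against the type
# B declares it expects.  Entirely illustrative names.

EXPECTED_TYPES = {
    "logger": str,      # the logger expects a string of characters
    "counter": int,     # the counter expects an integer
}

def send_to_feature(feature, value):
    expected = EXPECTED_TYPES[feature]
    if not isinstance(value, expected):
        raise TypeError(
            f"{feature} expects {expected.__name__}, got {type(value).__name__}"
        )
    return f"{feature} accepted {value!r}"

print(send_to_feature("logger", "link up on port 1/1/1"))
try:
    send_to_feature("logger", 42)   # integer sent where a string is expected
except TypeError as err:
    print("caught:", err)
```

A test case for this aspect is then simply: feed the wrong type and confirm the error is raised instead of silently corrupting the receiving process.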

    Those are a few rules of thumb to keep in mind while working as an SQA engineer. Most of them were written early in the era of software development but are still quite valid and important.

    Running all those tests on every minor or major fix is quite time-consuming, but it can be automated nowadays. The repetitive testing of a specific feature on every major or minor patch is called

    Regression testing.

    This kind of testing is done to check if the system is still sane after a newly compiled release, and most of the test categories can be skipped.

    It is important, though, not to skip checking that the basics work. So after installation, data and feature compatibility have been tested automatically and some important tests such as parameter checks have produced results, we move on to the real regression tests. They often go without a plan and are an analysis of the patched bugs and their interoperability with old fixes:

    • What if the new fixes are not working?
    • What if a new fix opens a new bug?
    • What if a new fix breaks an old fix?

    This kind of testing is important if the final product is to be presented without old bugs reappearing in the code while we think they are already fixed. While this kind of test builds confidence in the feature's stability across versions, it is very time-consuming and can draw attention away from real new bugs that can slip through blind spots of the whole test plan, so most regression testing is human-observing-automated. And while it is time-consuming and can't catch all the bugs – it provides the needed confidence when testing new, growing features through their development cycle. Most of those tests are set up on a single system and can't catch the big bugs in a fully operational environment. That's why there is
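A minimal sketch of what such a regression suite can look like: every fixed bug keeps a reproduction check that is replayed against each new build, so an old bug reappearing is flagged immediately. The build dictionaries, bug IDs and checks below are invented for illustration:

```python
# Minimal regression-suite sketch: each fixed bug keeps a reproduction
# check that is replayed against every new build.  Illustrative only.

def check_bug_1001(build):          # old fix: VLAN 1 must exist by default
    return 1 in build["vlans"]

def check_bug_1002(build):          # old fix: login banner must be set
    return build["banner"] != ""

REGRESSION_SUITE = {1001: check_bug_1001, 1002: check_bug_1002}

def run_regression(build):
    """Return the list of bug IDs that have reappeared in this build."""
    return [bug_id for bug_id, check in REGRESSION_SUITE.items()
            if not check(build)]

good_build = {"vlans": {1, 200}, "banner": "authorized access only"}
bad_build  = {"vlans": set(),    "banner": "authorized access only"}

print(run_regression(good_build))   # → []
print(run_regression(bad_build))    # → [1001]  (an old fix is broken again)
```

The human part of "human-observing-automated" is then reviewing the flagged IDs, not re-running the checks by hand.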

    Stress testing.

    What is important to test on a feature that is supposed to work on a heavily populated server or a high traffic network?

    • Learn its boundaries and try to overwhelm them. E.g. try to configure 4095 VLANs on a single port and flood all of them with a traffic generator.
    • Try to overflow a buffer. E.g. try 1 million administrative logins to a device under test.
    • Try to flood the feature with an enormous volume of data and see if it survives.
    • Try how the feature operates in spartan conditions like low memory or high CPU load.
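The first two bullets boil down to probing for the real boundary. A hedged sketch, using an invented `MockDevice` with an invented session limit in place of real equipment; a real harness would drive logins over the management interface instead:

```python
# Stress-test sketch: hammer a mock login service until it refuses,
# to learn (and then document) its actual limit.  MockDevice and its
# limit of 128 sessions are invented for illustration.

class MockDevice:
    MAX_SESSIONS = 128

    def __init__(self):
        self.sessions = 0

    def login(self):
        if self.sessions >= self.MAX_SESSIONS:
            raise RuntimeError("no free sessions")
        self.sessions += 1

def find_session_limit(device, attempts=1_000_000):
    """Try administrative logins until the device pushes back."""
    for n in range(attempts):
        try:
            device.login()
        except RuntimeError:
            return n          # the measured boundary
    return attempts           # never hit a limit within the budget

limit = find_session_limit(MockDevice())
print(f"device accepted {limit} concurrent logins")   # → 128
```

The point of the exercise is the number itself: a measured boundary becomes a documented product limitation instead of a customer-found surprise.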

    While those tests are very important, the test cases involved often catch one single bug in their whole life after they were originally written. That's why we often do the so-called

    Exploration/exploitation testing.

    We know the product, we know the feature, we know the code, we know EVERYTHING. That is exactly the bad part of our testing. The customer knows nothing except the configuration example in the manual. He needs the units for actual work, not for lab testing. So he or she starts building a big mesh network of units, wiring them with cables and setting up routing protocols. What happens? You guessed right – we missed some major blind spot. That's why exploitation testing is also very important.

    • Often unknown bugs are found in another feature while we test some new feature. It's important that the person responsible for that other feature be alerted and a new test case added to their test plan, so that next time this missed blind spot gets retested.
    • Sometimes your competitors' forums or a newsgroup can have some interesting bugs you can try to reproduce on the software or equipment you are developing.
    • And it happens, when you test the feature again and again on every major release, that the developers paste some old buggy code into the new feature and everything breaks. You should be vigilant about what enters as a new feature, where it enters, and what possible break points it can have. If you are not aware – then you probably will not like the job at all.

    The job description itself may surprise you.

     

     Posted at 8:41 pm
    Jun 27, 2011
     

    If you don’t yet know what an SQA job is, please read part 1.

    Features.

    There are always some important aspects of a feature that MUST be tested and some that are optional or informal (e.g. “The feature can also work in that way but it’s not required by design”). A few things must or can be known about a feature:

    • Software requirements.
    • Hardware requirements.
    • Customer requirements.
    • Limitations.

    Optionally, recommendations for testing can also be taken from:

    • Development documents.
    • Informal sources such as forum talks, e-mail lists, foreign experience and other sources not in the design notes.
    • Improving previous tests not originally designed for this specific feature.

    Ways to test a feature.

    1. Parameter testing, which, as we already mentioned, can be completely automated.
    2. Negative parameter testing. Also completely automatable.
    3. White box testing. We know how the feature is supposed to work and know its internals by observing the code itself. This method is suitable for QA engineers who can analyze the developers’ code and point out possible break points.
    4. Black box testing. We don’t know exactly what the feature is supposed to take as input and what output it provides. This method is suitable for proven software or device hackers who can find a way to penetrate the service or system under test. Sometimes a small script sending random parameters to the feature can find a major bug or a critical memory leak.
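That “small script sending random parameters” could look something like the sketch below. The target `parse_vlan_id` is an invented stand-in for the real entry point under test:

```python
# Black-box fuzz sketch: throw random parameters at a feature whose
# internals we pretend not to know, and record how each is handled.
# parse_vlan_id is a stand-in for the real entry point under test.
import random

def parse_vlan_id(raw):
    vid = int(raw)                 # may raise on junk input
    if not 1 <= vid <= 4095:
        raise ValueError(f"VLAN id {vid} out of range")
    return vid

def fuzz(target, rounds=500, seed=0):
    rng = random.Random(seed)      # fixed seed: reproducible runs
    failures = []
    for _ in range(rounds):
        raw = rng.choice([
            str(rng.randint(-10, 10_000)),   # numeric edge cases
            "", "abc", "4O95", "\x00",       # malformed strings
        ])
        try:
            target(raw)
        except (ValueError, TypeError) as err:
            failures.append((raw, err))      # rejected cleanly: good
        # any OTHER exception type escaping here would be a real bug
    return failures

bad = fuzz(parse_vlan_id)
print(f"{len(bad)} inputs rejected out of 500")
```

Cleanly rejected input is the expected outcome; the fuzzer earns its keep the day an input crashes the target with something other than a controlled error.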

    Implementation of the feature. (unit testing)

    Every feature is a part of a system. Having the feature working in the system can sometimes unravel even more bugs. An Inter Process Communication bug, or even wrong parameters passed between 2 features in a system, can lead to another major bug missed while testing the feature alone. While the feature is not fully integrated in the whole system, we can’t be sure it works 100%, so we run further tests. An example of this can be a system driver implementing VLAN support. We know we can assign this feature per physical device port but never tried to set it on a virtual port, because virtual ports are a different feature designed for routing protocols. If we only tested the feature on physical ports, we can’t be sure it works on virtual ports, etc.
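One cheap way to avoid exactly that blind spot is to parametrize the same test over every port class. A sketch, with `PhysicalPort`/`VirtualPort` as invented stand-ins for the real driver objects:

```python
# Integration-test sketch for the VLAN example: run the same assignment
# check over every port class, so virtual ports are not forgotten just
# because the feature was written with physical ports in mind.
# PhysicalPort/VirtualPort are invented stand-ins for the real driver.

class PhysicalPort:
    def __init__(self, name):
        self.name, self.vlan = name, None

    def assign_vlan(self, vid):
        self.vlan = vid

class VirtualPort(PhysicalPort):    # used by routing protocols
    pass

def test_vlan_on_all_port_types():
    results = {}
    for port in (PhysicalPort("1/1/1"), VirtualPort("ve1")):
        port.assign_vlan(200)
        results[type(port).__name__] = (port.vlan == 200)
    return results

print(test_vlan_on_all_port_types())
# → {'PhysicalPort': True, 'VirtualPort': True}
```

Adding a new port class to the loop is then a one-line change, instead of a whole forgotten test case.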

    Dialog between systems.

    The next step in feature testing comes when we are happy with how the system implements this new feature.

    In most features, an interactive dialog between two systems is what the feature actually does. In the VLAN example I use often in this article, this would be the dialog between 2 ports on neighbouring systems:

    [Figure: dialog between 2 systems]

    So, there will also be additional tests in a setup of 2 or more systems. Those tests will include the protocol acceptance between them, the right encapsulation of the packets, checksums and their decoding, the timings, delays and many other things that can’t be tested if one system or the feature alone is under test.

    Some black box testing can be done here as well. We can inject a wrong dialog between the 2 systems and see how they behave: how long it takes to resume normal operation, and what security breaches are possible if a dialog is broken and a spoofed dialog is injected instead of the expected one.
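A toy sketch of that spoofed-dialog check: the shared-secret HMAC below is an invented stand-in for whatever authentication or checksum the real protocol uses, but the test shape is the same – inject a forged frame and confirm the receiver drops it and carries on:

```python
# Sketch of injecting a spoofed message into a two-system dialog and
# checking that the receiver rejects it and resumes.  The shared-secret
# HMAC is a toy stand-in for real frame authentication / checksums.
import hashlib
import hmac

SECRET = b"lab-shared-secret"       # assumption: both systems share a key

def sign(payload: bytes) -> bytes:
    return hmac.new(SECRET, payload, hashlib.sha256).digest()

def receive(payload: bytes, tag: bytes) -> str:
    if not hmac.compare_digest(sign(payload), tag):
        return "dropped spoofed frame, dialog resumes"
    return f"accepted: {payload.decode()}"

genuine = b"HELLO port 1/1/1"
print(receive(genuine, sign(genuine)))          # legitimate dialog
print(receive(b"HELLO evil", sign(genuine)))    # injected/spoofed frame
```

Measuring how long the real systems take to resume after the drop is the timing half of the same test.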

    Dialog with higher level features.

    The same example as before. Imagine the newly implemented VLAN feature has to be tested to build an MSTP ring on a larger scale of 4 core switch units, with a dozen more border/demarcation switch units outside the main ring. This example is one of the projects I work on often. The features we test are not as simple as the VLAN, but it takes big networks built in a lab, and often new bugs pop up, so it really pays to have a big system for testing a basic feature. Test cases such as this last one take a lot of time to develop, and the actual test can also take a lot of time.

    It is really important though.

    Dialog with foreign or competitor systems. (compatibility)

    Another very important test case, which also requires much more expensive lab equipment. Imagine a ring with Cisco, 3Com, HP, Extreme Networks and Telco Systems core switches. All of them have an implementation of Virtual LAN per port. You can make a simple test case for multi-vendor testing, or even an MSTP ring, R-APS ring or MPLS mesh. It takes time to develop, and additional knowledge of the competition’s equipment.

    MOST if not ALL of the QA labs have such test cases. Your management team should be aware of the trends in the other brands and will probably have some units in the lab for compatibility tests.

     Posted at 7:42 pm
    Jun 27, 2011
     

    I had to switch jobs 9 months ago, because my ex-boss almost went bankrupt because of the World Economic Crisis, so he was unable to pay me anymore. It took some research in the job market – 2 months to choose what to do next. I’ve been a jack-of-all-trades: IT, support, programmer, manager, etc. Never been a QA in my life, so when the guys from Telco Systems called, I said to myself: “Why not? Never been a QA in my life.”

    Ever wondered what QAs actually do? I did some research before the interview. The top Google picks were about the so-called ISO 9001 quality standard – not telling much about what I’d be expected to do. Not a single diagram of what’s what. Not a single test plan to look at as an example. I was a bit unprepared at the interview but managed well 😉
    I am writing this article so that anyone searching in the future for “how to” or “what is” QA will have some idea of what exactly QA is and how to live with it.

    Introduction.

    Software development and testing is done in a few overlapping steps; some of them lead to others, while some take the whole process back to the workbench to be rewritten from scratch. It can take 3 months for a feature to enter the development cycle and exit qualified and approved for publishing or selling to the customer.

    [Figure: Software Test Cycle]

    If you ever managed to look into this process while working in IT or as a developer, you already know how many bugs get coded. Statistics gathered by leading development companies say that code initially has at least 10 defects per ~500 lines, or 20-25 bugs per KLOC (Kilo Lines Of Code), and every few fixes raise another (new) bug. By looking at the diagram you can see that the process is not straightforward. You may send a bug back for development, and the developers can make a fix and hand it back for you to analyze and retest. At least, that is what an SQA team is doing: testing, analyzing and providing feedback on the bugs found.

    Quality vs quantity.

    Every bug has its own weight and severity. If a bug is very severe (a critical one that can’t be left, because the feature is actually not working), that bug is served by the development team with the highest priority. Cosmetic bugs (e.g. a misspelling or a typo in the interface) can always be served later. The product/feature is ready when all major bugs are resolved and all the visible minor bugs are taken care of.
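That severity ordering is, in effect, a priority queue. A toy sketch, with an invented four-level severity scale:

```python
# Toy triage sketch: serve bugs by severity so a critical defect is
# never queued behind cosmetic ones.  The severity scale is invented
# for illustration; real trackers have their own.
import heapq

SEVERITY = {"critical": 0, "major": 1, "minor": 2, "cosmetic": 3}

def triage(bugs):
    """Order (severity, title) pairs: most severe first, FIFO on ties."""
    heap = [(SEVERITY[sev], i, title) for i, (sev, title) in enumerate(bugs)]
    heapq.heapify(heap)
    return [title for _, _, title in
            (heapq.heappop(heap) for _ in range(len(heap)))]

reported = [
    ("cosmetic", "typo in CLI help"),
    ("critical", "feature does not start"),
    ("minor", "log line duplicated"),
]
print(triage(reported))
# → ['feature does not start', 'log line duplicated', 'typo in CLI help']
```

The index in the tuple keeps same-severity bugs in reporting order, so nothing starves.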

    However, if the QA team focuses on catching all the minor bugs while missing even one severe bug, the whole process will be tainted and meaningless.

    Automation.

    The minor bugs can be found by automated testing software that is developed solely for this purpose. There is rarely any ready-out-of-the-box testing suite, so most companies have QA developers who work on the test automation itself. Automated testing catches most of the minor bugs and provides “sanity” for future testing. E.g. if the system does not boot with the newly compiled software – further testing is obviously not sane. This minor version goes back to the developers for review and recompilation.

    There are very good scripting languages, such as TCL, Python, Perl and others, that can log in remotely to a unit under test, apply a new configuration and check the results.

    The QA job is to actually run the automated tests on every minor release and check if all the basics are working.

    Manual testing.

    That’s the bread and butter of this profession. For every new feature there must be a new test plan written. The test plan consists of test cases. By definition, any test case should test only one aspect of the feature, or only one of its parameters. A few example steps:

    1. Configure VLAN Id 200 on port 1/1/1
    2. Check that the running device configuration has the line.
    3. Configure an out-of-range VLAN ID (valid range is [1..4095]).
    4. Check that error message pops up.
    5. Check that the running device configuration does not have the line.
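The five manual steps above can be sketched as an automated test. `MockSwitch` and its API are invented for illustration; a real harness would drive the CLI over telnet/SSH instead:

```python
# The five manual steps, automated against a mock device.
# MockSwitch is an invented stand-in for real equipment.

class MockSwitch:
    VALID_RANGE = range(1, 4096)        # VLAN IDs 1..4095, per the article

    def __init__(self):
        self.running_config = []

    def configure_vlan(self, port, vid):
        if vid not in self.VALID_RANGE:
            return "%% Error: VLAN id out of range"
        self.running_config.append(f"vlan {vid} port {port}")
        return "OK"

def test_vlan_config():
    sw = MockSwitch()
    assert sw.configure_vlan("1/1/1", 200) == "OK"             # step 1
    assert "vlan 200 port 1/1/1" in sw.running_config          # step 2
    msg = sw.configure_vlan("1/1/1", 5000)                     # step 3
    assert msg.startswith("%% Error")                          # step 4
    assert "vlan 5000 port 1/1/1" not in sw.running_config     # step 5
    return "all 5 steps passed"

print(test_vlan_config())
```

Each assertion maps one-to-one onto a numbered step, which is exactly the “one aspect per test case” rule in executable form.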

    The QA job is to design a good plan stretching to all the feature’s parameters and options, and to write test cases for each and every one of them. Test cases can be either just a parameter check, a negative check, or mixed like this one. A good test plan has all the possible checks and negative cases.

    Some examples of the above can be found in specific QA forums, but the information is scarce, because every product has its own testing strategies, and some of the information regarding the testing or the product can also be a well-protected secret.

    Interested? Then keep reading.

     Posted at 7:42 pm