I got the following question in my mailbox a few days ago:
“I was reading the PowerPoint deck you prepared for the East Anglia Branch of the BCS and wondered what your thoughts were around Testing.
How should testers tackle verbal requirements – as you say Requirements shouldn’t be a document but Testers rely on such documentation as check lists for their testing?
I would be interested as I find myself in an ‘Agile’ environment and this issue takes up a lot of our time. Any advice would be great.”
I think this is an interesting question, and I think we often ignore the role of Testers in Agile software development. This is for two reasons: first, Agile came from developers, so it ignores Testers just as it ignores Product Management; second, Developers don’t really want to admit the need for Testers. Developers always think they can do it right and that nobody needs to check on them. Call it arrogance if you like.
Consequently testers have problems like this. So I’m happy to try and suggest some answers.
(By the way, if anyone else out there reading this blog has a question please e-mail it to me; I’m more than happy to answer, especially if it gives me bloggable material.)
The short answer to this question is that Testers need to be involved in the same dialogue as the Developers and Product Managers. The problem is not so much the lack of a document as unclear requirements, requirements that are too specific, or Testers being cut out of the loop altogether.
People focus on the document because, despite its imperfections, it is the only thing available. The deeper problem is often that nobody proactively owns the business need; we somehow assume developers know best.
(I’m using the term Product Manager here. In most organizations this person goes by the name of Business Analyst, Product Manager, Product Owner (in Scrum) or even Project Manager. Whatever they are called, it is the person who decides what should be in the product and how it should behave.)
If we break the problem down it comes in three parts:
1. What needs testing?
2. How should it perform?
3. How should it be tested?
The planning/scheduling system should supply the “what to test” part. If you are using a visual tracking system it should be clear what is being worked on and when it is ready for test. (I’ve described this in my Blue-White-Red process.)
If Software Testers are having problems knowing what to test it is usually a problem with the project tracking. In Blue-White-Red each piece of work is represented as a card; the card journeys across the board and can’t finish its journey until a Tester approves. Testers might demand to see unit tests, ask for evidence of a code review or pair programming, and may try the work themselves.
I think my correspondent is having a problem with the second question. After all, if you don’t have a written document how do you know how the software should perform? As I’ve said before, written documents are flawed - see Requirements are a Dialogue not a Document from last year.
Testers need to be part of this dialogue; they need to be sitting there when the Product Manager and Developer discuss the work so they can understand what is required at the end. If the work is contentious, or poorly understood, they may take notes and confirm these with the Product Manager later. They may also devise tests to tell when these requirements are met.
The Tester’s role is in part to close the loop; it follows from the Product Manager’s role, not the Developer’s. Testers need to understand the problem the Product Manager has, the one the Developer is solving, so they can check that the problem is solved.
If Testers are having a problem knowing how something should be then it is probably a question of Product Ownership and Management. When this role is understaffed or done badly people take short-cuts and it becomes difficult to know how things should be. One of the reasons many people like a signed-off document is that it ensures this process is done - or at least, it ensures something is done. (I’ll give an example later.) But freezing things for too long reduces flexibility and inhibits change.
Finally, the “how should it be tested” question: well, that is the Testers’ speciality. That is their domain, so they get to decide.
That’s the basic answer but there is more that needs to be said - sorry for the long answer.
In my experience there are four levels of testing. Most organizations have more than one; two is the norm; having all four is probably overkill.
User Acceptance Testing (UAT): Performed by Software Testers and Business Analysts, and/or Customer representatives. Formal test cycle using release candidates. Software is released with ‘release notes’ listing changes, additions, fixes, etc. Seldom found in ISVs (Independent Software Vendors) which produce shrink-wrap software; conversely it may be a regulatory requirement in environments like banking. Always conducted in a dedicated UAT environment.
System / Integration Testing: Performed by Software Testers to ensure software works as expected when working with existing software and other systems. Usually, but not always, working with release candidates, although these may not be as formal as in UAT. Usually conducted in a dedicated system test environment.
Functional / Embedded / Close Support Testing: Performed by Software Testers working closely with development teams. Tests any sort of build, usually from the (overnight) build environment, or potential release candidates, or even from developers’ machines. Helps to keep developers honest and brings testing skills and considerations right into the development team.
Developer / Unit Testing: Performed by developers on their own work before turning it over to anyone else. Traditionally manual - perhaps using formal documents and “condition response” charts. Increasingly automated with tools like JUnit and Aeryn.
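To make that last level concrete, here is a minimal sketch of what an automated developer test might look like in JUnit 4 style; PriceCalculator and its discount rule are hypothetical, invented purely for illustration:

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    // A minimal developer/unit test sketch (JUnit 4 style).
    // PriceCalculator is a hypothetical class under test, used for illustration only.
    public class PriceCalculatorTest {

        @Test
        public void appliesTenPercentDiscountToOneHundredPoundOrders() {
            PriceCalculator calc = new PriceCalculator();
            // Expect a 10% discount on a £100 order, within a small tolerance.
            assertEquals(90.0, calc.discountedPrice(100.0), 0.001);
        }
    }

The tool matters less than the habit: a test like this runs in seconds, documents the developer’s intent, and can be re-run by anyone who touches the code later.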
Notes:
1. I’ve deliberately excluded regression testing here; although it is important it is not a level in its own right. It is an activity that happens within one or more of these levels.
2. Different organizations use different names and different descriptions for these levels. I’m not concerned what they are called; these are the levels I see.
My correspondent didn’t mention what sort of testing they were doing. If they are working at the UAT or System Test level they should be getting Release Notes from the development team to tell them what has been changed - not how it should work but what to look for.
One of the tenets of Agile software development is higher quality. If the code the developers produce is of higher quality then there is less rework to be done, so there is less disruption and overall productivity rises. This is the old Philip Crosby Quality is Free argument. Thus the overall approach is to inject more quality at the lower levels, i.e. Developer and Embedded Testing, rather than at UAT and System Test.
Good developers have always tested their own work. This has taken different forms over the years but is usually called Unit Testing. Nobody has ever recommended that developers slap some code down and pass it to testing. This might be what has happened sometimes but nobody has ever recommended it.
In compiled languages no developer would ever try to claim their work was finished until it at least compiled. And neither should they try to release it unless they have made some effort to ensure it does what it was supposed to.
What Agile software development does is suggest a set of practices - like TDD, pair programming, code reviews, etc. - which improve the developer-testing side of the work in order to reduce the amount of work required later.
When this works, code quality improves, Software Testers get to pass more work and raise fewer issues, and the whole process runs more smoothly. Testers can play a role here by keeping the developers honest; they should be asking: was the code reviewed? Have unit tests been written? Have the tests been added to the build scripts?
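One lightweight way to make the “have the tests been added to the build?” question easy to answer is to have the build run a single aggregating suite. This is a sketch only, assuming JUnit 4; the listed test classes are hypothetical:

    import org.junit.runner.RunWith;
    import org.junit.runners.Suite;

    // A sketch of an aggregating test suite the build scripts can run.
    // PriceCalculatorTest and OrderProcessingTest are hypothetical test classes;
    // a new test that is not listed here (or picked up by the build some other
    // way) is a test the build never runs.
    @RunWith(Suite.class)
    @Suite.SuiteClasses({ PriceCalculatorTest.class, OrderProcessingTest.class })
    public class AllUnitTests {
        // Intentionally empty: the annotations tell JUnit which tests to run.
    }

A Tester who sees work finish without the suite growing has a concrete question to ask.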
The pure-testing part of a Software Tester’s work will reduce over time, allowing them to focus on the Quality Assurance aspect of their work. Unfortunately, in the software world the terms “Testing” and “Quality Assurance” are too often confused and used interchangeably.
So, as quality increases the amount of testing should reduce, and the amount of Quality Assurance increases - which again highlights the link between the Product Manager and the Tester.
One way in which Quality Assurance should work, and where Testers can help, is by defining, even quantifying, the acceptance criteria for a piece of work. Again this picks up from Product Management.
For example:
In many “traditional” environments the Product Manager comes to the developers and says: “Make the system easier to use.” The Developers respond by putting in more menu items and other controls, and the Tester then has to test that the items exist and work.
What should happen is: the Product Manager comes to the Developers and Testers and says: “Users report the system is difficult and we would sell more if it was easier to use.” All three groups have a discussion about why the system is difficult to use, perhaps there is some research and user observation. Product Managers assess how many more sales there are to be had, Testers determine how they would measure “ease of use” - how they would know the system was easier to use - and the Developers come up with some options for making it easier.
Knowing how many more sales there are to be had determines how much effort is justified, and the test criteria help inform the selection of options.
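To show what a quantified criterion might look like, here is a sketch that pins “easier to use” to something measurable; CheckoutSimulator and the five-click threshold are invented for the example, the real measure would come out of the dialogue above:

    import static org.junit.Assert.assertTrue;

    import org.junit.Test;

    // A sketch of a quantified acceptance criterion for "ease of use".
    // CheckoutSimulator is a hypothetical helper that drives the application
    // and counts the interactions a user needs to complete a task.
    public class EaseOfUseTest {

        @Test
        public void existingCustomerCanBuyAnItemInFiveClicksOrFewer() {
            CheckoutSimulator checkout = new CheckoutSimulator();
            checkout.logInAsExistingCustomer();
            checkout.buySingleItem();
            assertTrue("Checkout took " + checkout.clickCount() + " clicks",
                       checkout.clickCount() <= 5);
        }
    }

The number matters less than the fact that there is a number: once “easier to use” means “five clicks or fewer” everyone knows when the work is done.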
As Agile development is introduced into an environment with legacy code it takes more time for this effect to come through. Even if Developers start producing higher-quality code there are still a lot of legacy checks that need to be performed manually. Plus, as Developers’ productivity increases there is more to check.
The answer is that Testers need to bring in their own mechanisms for improving consistency and quality. This often takes the form of automated acceptance tests - perhaps using FIT or Selenium.
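For instance, an automated acceptance test written against Selenium’s WebDriver API might look something like this sketch; the URL, element names and expected text are placeholders, not a real application:

    import static org.junit.Assert.assertTrue;

    import org.junit.Test;
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.firefox.FirefoxDriver;

    // A minimal automated acceptance test sketch using Selenium WebDriver.
    // The URL, element names and expected text are placeholders for illustration.
    public class LoginAcceptanceTest {

        @Test
        public void registeredUserCanLogIn() {
            WebDriver driver = new FirefoxDriver();
            try {
                driver.get("http://localhost:8080/login");
                driver.findElement(By.name("username")).sendKeys("alice");
                driver.findElement(By.name("password")).sendKeys("secret");
                driver.findElement(By.id("login-button")).click();
                assertTrue(driver.getPageSource().contains("Welcome"));
            } finally {
                driver.quit();
            }
        }
    }

Tests like this give Testers a regression safety net over legacy behaviour while the developers build up unit tests underneath.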
In the longer term quality all round should increase and - this is the scary bit for software testers - the need for testers should reduce.
If all software were perfect the need for Software Testers would go away. The presence of Software Testers shows that something is not right. The same could be said of many other industries which employ testers, but the reality is that testing is a multi-billion pound industry.
So, should Software Testers be worried about their jobs when Agile software development comes along? Some are. I remember one Test Manager who liked the ideas behind Agile but could not bring himself to embrace it. He saw it as a threat to his job and to his people.
But no, I don’t think Software Testers should be worried about their jobs.
Even as quality increases there will be a need for some qualification that things are good. There are some things that still require human validation - perhaps a GUI design, or a page layout, or printer alignment.
In legacy systems it will be a long time before all code is unit tested; in fact I think it is unlikely ever to happen, simply because it doesn’t need to. The Power Law tells us that only part of the system needs to be covered by automated tests. Still, the rest of the system needs checking, so there is work for Testers.
As Testers work more closely with Business Analysts and Product Managers their knowledge of and insights into the business will grow, allowing them to add more value - or even to move into Business Analysis or Product Management themselves if they wish.
And if this weren’t enough the success of Agile should make more work for everyone. Agile teams are more productive and add more business value. Therefore the organization will succeed, therefore there will be more business, therefore there will be more work to do.
The summary of the long answer is: Agile does mean change for Software Testers; it means putting more emphasis on quality and less on testing. I think it was Shigeo Shingo of Toyota who said there were two types of inspection [testing]:
• Inspection to find defects is waste [so should be eliminated]
• Inspection to prevent defects is essential.
Software Testers need to reconsider their work as the prevention of defects, not the detection of defects.
I hope that answers the question. Without knowing more about the organization’s set-up it’s hard to be more specific. And I apologise for the length; in Blaise Pascal fashion, I was working fast.
If anyone else has a question just ask!