Test automation - parallelism and the questions it raises
The usual disclaimer of 'I don't know what I'm talking about' applies here...
Lately we have been feeling some software quality hurt - not enough time, not enough people testing, stuff still getting missed and reported as bugs by the customer. As this is a topic not unfamiliar to me, I decided to do some deeper research, and even attend a local user group to discuss test automation.
I already knew that 'GUI test bad! Unit/API test good!', but confirmed with multiple developers that our legacy app simply had a lot tied into the GUI. That's ok - legacy is where your revenue is, as someone pointed out (possibly at DevOpsDays Toronto?) - so we know that we kinda need GUI testing.
The user group's consensus was 'you need a test framework', so I figured we needed a test framework and set about building one. There were a lot of bad reasons for me being the person to do this...but a few good ones, too:
- I actually had some time to spare.
- I was responsible for deployments and manual post-deploy verification.
- I was newly responsible for manually smoke testing the staging environment prior to a deploy.
- I was responsible for monitoring, the most relevant piece being New Relic Scripted Browser Synthetics.
Chief among the bad reasons for me writing a test framework (or anything, really) was the whole 'I have zero development experience' thing. The guy teaching the Pluralsight course (Creating an Automated Testing Framework With Selenium) did a good job though, and I managed to hack together something that not only worked, but was relatively easy for me to expand!
\o/
So here we are - we now have 30-40 pages covered by tests and a 1000-line helper class (the worst copy pasta), and then I see the browser/platform matrix from QA for the new project coming online soon...
D:
I had played with parallelism early on in the framework's life, mainly because I knew how impactful that change would be, but never got anywhere with it. Now it was super duper required. So I went back to the problem again and ... got nowhere. I did, however, manage to get Sauce Labs' example framework running in parallel, but as soon as I tried to transplant that approach into my own test cases, nothing worked.
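For what it's worth, the usual culprit when a working parallel example breaks after being transplanted is shared state - most often a single static WebDriver instance that every test grabs. Here's a minimal sketch of the fix, in Python/pytest purely for illustration (not my actual framework, and the names are made up): give each test its own driver via a fixture, so a runner like pytest-xdist can fan tests out across workers.

```python
# Sketch only: one WebDriver per test, so tests are safe to run in parallel.
# Assumes selenium, pytest, and pytest-xdist are installed, plus a local
# chromedriver on PATH. Run in parallel with: pytest -n 4
import pytest
from selenium import webdriver


@pytest.fixture
def driver():
    # Each test gets its own isolated browser - no state is shared
    # between parallel workers.
    d = webdriver.Chrome()
    yield d
    d.quit()


def test_homepage_loads(driver):  # hypothetical test
    driver.get("https://example.com")  # placeholder URL
    assert "Example" in driver.title
```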
More digging later and we have this: parallelism is hard! In general! Even the guy who did the Pluralsight course I used essentially said, later on in the comments, 'avoid it if you can' (granted, that was a few years ago). The general avoidance tactic is to simply throw more agents at it and use targeted test categories.
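In the same hedged pytest sketch, 'targeted test categories' could be plain markers - tag the tests, then point each agent at one slice (the marker names here are invented):

```python
# Sketch only: categorize tests with pytest markers (register the names
# under `markers =` in pytest.ini to avoid warnings).
import pytest


@pytest.mark.smoke
def test_login_page_loads():
    ...  # quick check, runs on every deploy


@pytest.mark.regression
def test_full_checkout_flow():
    ...  # slow end-to-end scenario, reserved for a nightly agent
```

One agent then runs `pytest -m smoke`, another `pytest -m regression`, and you get rough parallelism without ever making a single test run thread-safe.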
The crux of the matter...how badly do you want this?
So I would suggest that these questions will aid you on your journey:
- Why do you need (GUI) test automation? What do you seek to improve?
- How will you know it has improved? How will you demonstrate that more (or less) effort is required?
- Is the team behind this? How will this effort persist?
This is what we'll be working towards...I'm sure there will be a report later this year...