Digest of “Agile Testing Perspectives”

On a bright Sunday morning I finally managed to read “Agile Testing Perspectives”, an anthology of essays by ThoughtWorkers. It made for a great read and I wanted to share some of the interesting things in the book. This is in no way meant to deter you from reading it yourself. The book can be downloaded from here.

The book opens with a testing technology timeline (to which I had the opportunity to contribute), a nearly exhaustive list of 24 testing tools / suites grouped by year of creation. It is an interesting visualization of how, over the years, open source testing tools have grown while commercial ones have dwindled. It also shows how mobile testing tools have proliferated in the past 3 years.

The first essay is Anand Bagmar‘s “Is Selenium Finely Aged Wine?” (I always wondered why ThoughtWorks’ testing conference had to be named “VodQA” by its creator Anand, but this title clarified a great deal! 🙂 ) In this info-packed article, Anand starts with how Selenium took its baby steps in software testing and goes on to elaborate what made Selenium popular and how it is ubiquitous across the industry today. He leaves a word of caution that Selenium should not be used in ignorance of the Test Pyramid, and that such usage may lead to irrecoverable loss to a project. At the end, he also advises testers on how they should prepare for the future of testing.

The book moves on to Daniel Amorim‘s essay “Testing in an Agile Environment”. In this article, Daniel writes about three dimensions in which an Agile tester can operate: “Business”, “Technical” and “DevOps”. He goes on to give a lucid account of the typical activities of testers in each of these dimensions. One of the highlights of this essay is that Daniel also mentions the books that testers in each dimension might refer to. He rounds off the article with what is common amongst these different dimensions. I could relate this article to my blog post “Should testers be technical?”, as it reinforces some of my opinions on the subject.

The next one in line is “Testing For Mobile” by Alabê Duarte and Fábio Maia. In this informative article, the duo emphasize unit testing best practices, such as test doubles (“TestDouble”), that can be applied to mobile applications.
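To give a flavour of the test-double idea the duo discuss, here is a minimal, hypothetical sketch in Python (the class and method names are invented for illustration, not taken from the book): a fake client stands in for a real network call, so a mobile app's presentation logic can be unit-tested without a device, emulator, or server.

```python
class FakeProfileClient:
    """Test double: replaces a real HTTP client in unit tests."""
    def fetch_profile(self, user_id):
        # Returns canned data instead of hitting the network.
        return {"id": user_id, "name": "Test User"}


class ProfileScreen:
    """Logic under test; the client dependency is injected."""
    def __init__(self, client):
        self.client = client

    def greeting(self, user_id):
        profile = self.client.fetch_profile(user_id)
        return f"Hello, {profile['name']}!"


# The unit test runs fast and deterministically, with no I/O.
screen = ProfileScreen(FakeProfileClient())
assert screen.greeting(42) == "Hello, Test User!"
```

The key design point is dependency injection: because `ProfileScreen` receives its client rather than constructing it, the real client can be swapped for a double in tests.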

The book moves on to “BDD Style of Testing In Mobile Applications” by Prateek Baheti and Vishnu Karthik. The duo go over how BDD can help in testing mobile applications by expressing tests in a platform-agnostic, domain-specific language. This nice tip is supported by code examples for better understanding.
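As a rough sketch of what “platform-agnostic” means here (this toy example is my own, not the book's code), one business-readable step can be bound to an implementation that works against whichever platform driver it is handed:

```python
# Minimal hand-rolled step registry, in the spirit of BDD tools like
# Cucumber. All names below are invented for illustration.
STEPS = {}

def step(text):
    """Decorator: register a function as the implementation of a step."""
    def register(fn):
        STEPS[text] = fn
        return fn
    return register


class AndroidDriver:
    def tap(self, element_id):
        return f"android tapped {element_id}"


class IOSDriver:
    def tap(self, element_id):
        return f"ios tapped {element_id}"


@step("the user taps the login button")
def tap_login(driver):
    # Platform details live here, not in the specification text.
    return driver.tap("login_button")


def run(step_text, driver):
    """Execute one scenario step against the given platform driver."""
    return STEPS[step_text](driver)


# The same specification line drives both platforms.
print(run("the user taps the login button", AndroidDriver()))
print(run("the user taps the login button", IOSDriver()))
```

The specification (“the user taps the login button”) never mentions Android or iOS; only the driver passed in changes, which is the platform-agnostic property the essay highlights.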

The next in line is “Continuous Delivery for Mobile Applications” by Gayathri Mohan, who gives an in-depth account of how to set up Continuous Delivery for mobile hybrid applications. She starts by emphasizing the need for CD in mobile and lists the best practices. She goes on to build a cross-platform functional test framework for hybrid mobile apps using the Page Object Pattern, and then outlines how to set up CI. This is certainly a reference for those trying to test hybrid apps and set up CD in their projects.
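For readers unfamiliar with the Page Object Pattern she uses, here is a hypothetical Python sketch (a stand-in `FakeDriver` replaces a real Selenium/Appium driver so the snippet is self-contained; all names are assumptions of mine, not Gayathri's framework):

```python
class FakeDriver:
    """Stand-in for a real Selenium/Appium driver."""
    def __init__(self):
        self.typed = {}

    def type_into(self, locator, text):
        self.typed[locator] = text

    def click(self, locator):
        # Pretend submitting succeeds once a username was entered.
        return "home_page" if self.typed.get("username") else "login_page"


class LoginPage:
    """Page object: the one place that knows this page's locators."""
    def __init__(self, driver):
        self.driver = driver

    def log_in(self, user, password):
        self.driver.type_into("username", user)
        self.driver.type_into("password", password)
        return self.driver.click("submit")


# The test reads as intent, not as raw driver calls.
page = LoginPage(FakeDriver())
assert page.log_in("alice", "secret") == "home_page"
```

The payoff for cross-platform hybrid testing is that only the page objects hold locators and driver calls; when a locator changes between platforms or releases, the tests themselves stay untouched.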

Next in line, “Challenges in Mobile Testing?” by Vikrant Chauhan and Sushant Choudhary is a treasure-trove of mobile testing challenges. The exhaustive list of challenges in both mobile website testing and mobile application testing covers the depth and breadth of the problems one could encounter in mobile testing.

The book then moves on to “Three Misconceptions about BDD”, in which Nicholas Pufal and Juraci Vieira start by differentiating between “practicing BDD” and “using a BDD tool” and go on to adeptly bust three myths of BDD: “The client doesn’t care about testing”, “The client doesn’t want to write the specifications” and “You can achieve the same without a business readable DSL”, strengthening their arguments with appropriate examples. The authors end the essay with an interesting case study where they applied BDD successfully.

The last essay in the book is “Hiring Selenium QA People”. In this riveting essay, Paul Hammant speaks from his experience about what to expect from Selenium QAs at different levels, which he classifies progressively as “Inexperienced”, “Entry Level”, “Intermediate”, “Advanced” and “Expert”. At each level, he specifies possible backgrounds of candidates and adds interview advice. He leaves us with a chuckle at the end, saying the “Expert” candidate could be “Simon Stewart”. 🙂

A great compilation! Thanks to all the authors who shared their thoughts!

