

Test Automation By Example With Sahi

I created the following example test script to illustrate some practices I’ve found useful when writing test automation. This example is written in Sahi, but these practices are tool-agnostic and could be transferred to any tool (though Sahi is a great one!). For our example, we’ll be using MyTrashMail.com (a fine testing resource) as the SUT (system under test). Our goal is simply to confirm that an email can be deleted from the system.

The Test

Size Matters!

The first thing you may notice is the size of the test… it’s quite small, and size does matter! Barring end-to-end or scenario tests, small tests are best. They should be as singular and short as possible. Twenty lines of code or fewer is a good rule of thumb, as is a single assert in each test. Long tests and multiple asserts are signs that you might be testing more than one thing. Short, concise tests are easier to read and maintain, and lend themselves to being run in parallel (multiple tests running at once).

Filenames as documentation

You may have glanced right over another practice I find invaluable… the script filename (which I’ve placed in the header comment): “0001.delete.random.email.from.inbox.and.confirm.deletion.sah”. I’m a big fan of using code as documentation, and that definitely includes filenames. Test script filenames (or class names), like bug titles, help to concisely communicate what is being tested. In this way, a list of filenames from your test suite could double as an impromptu test plan/test script.

Having an ID in the filename is also very helpful. This ID could be the story card ID, bug ID or just an incremental ID. This allows you to group like files together, aids in searches, and provides a cross-reference from the test to the associated story/bug should you need it (and you probably will).

Script Walkthrough

Now let’s walk through the script a bit… The first two lines just include the script’s associated functions (more on this in a minute) and set the MyTrashMail account name we’ll be using.

The next line uses Sahi’s built-in JavaScript method _navigateTo to navigate to our system under test, MyTrashMail.com. Sometimes there’s no need to reinvent the wheel, as Sahi’s built-in methods are quite good.
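A rough sketch of those opening lines (the include filename here is an assumption; “trash” is the account name used later in this example):

    // 0001.delete.random.email.from.inbox.and.confirm.deletion.sah
    _include("mytrashmail.functions.sah");   // shared helper functions (filename assumed)
    var $accountName = "trash";              // MyTrashMail account used by this test

    _navigateTo("http://www.mytrashmail.com/");   // Sahi built-in: open the SUT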

Abstraction Via Functions

Next, instead of using raw Sahi code to enter our email account name, we break it out into a function for a couple of reasons. First, the function name checkEmailAccount() is more descriptive than the Sahi code and aids readability; second, it helps maintainability. For example, should we use this function in multiple scripts and should the application code change, we need only update the function to fix multiple tests. Both of these improvements could likely see further enhancement if we also incorporated page objects… but that’s a topic for another day.
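A minimal sketch of such a function, assuming MyTrashMail exposes a textbox and a button for checking an account (the element identifiers are guesses, not the real markup):

    function checkEmailAccount($accountName) {
        // Identifiers below are assumptions about MyTrashMail's markup
        _setValue(_textbox("email"), $accountName);   // enter the account name
        _click(_button("Check Email"));               // open that account's inbox
    }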

Now we’re into the heart of the test… we need to click into an existing email, and here again we’ll break this out into a function, which greatly helps to streamline the code. This could be handled a number of ways, but strategically adding a little randomness into tests can expand coverage and increase the chance of the script finding a bug. Thus, we create clickRandomEmailLink() to do the dirty work. As an added bonus, the function also provides some error handling and returns the random email’s URL for use later in the script!

Functions

As you can see, clickRandomEmailLink() gets the number of emails in the account by counting the little mail icons (messagestatus0.gif; thanks, MyTrashMail devs!). If there is at least one email, we then pick a random number between 0 and $numDisplayedEmails and use that to select the Nth link under the Subject column (_cell("Subject")) in the Table1 table (_table("Table1")).

We also return the browser’s current URL… more on that in a bit.

If there are no emails in the account (and the test currently expects at least one to exist), the script fails and logs the failure via Sahi’s method _logExceptionAsFailure().
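Putting those pieces together, clickRandomEmailLink() might look roughly like this. It’s a sketch, not the exact script: the element identifiers and the _set call used to capture the browser URL are assumptions.

    function clickRandomEmailLink() {
        // Count the inbox rows by counting the little mail icons
        var $numDisplayedEmails = _count("_image", "messagestatus0.gif");

        if ($numDisplayedEmails > 0) {
            // Pick a random row and click its link under the Subject column of Table1
            var $index = _random($numDisplayedEmails - 1);
            _click(_link($index, _in(_table("Table1")), _under(_cell("Subject"))));

            // Capture the opened email's URL so the test can return to it after deletion
            var $emailURL;
            _set($emailURL, window.location.href);   // fetch a browser-side value into a Sahi variable
            return $emailURL;
        } else {
            // The test currently expects at least one email to exist
            _logExceptionAsFailure("No emails found in the account");
        }
    }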

With me?

Found Data

It’s important to note that we’re not creating an email in this script; we don’t know (and don’t need to know) anything about the email. We’re boldly jumping in and finding what’s there in the system, and testing it.

In this example, we’re using a popular account name, "trash", that seems to have a continuous stream of emails coming in. In other "normal" instances, we would have another script to test email creation elsewhere (and likely more than one). These tests would then feed off of each other… one creating and one deleting.

Final Stretch

Finally, our work pays off… we delete the email. We could have just left the Sahi code in the script but deleteEmail() reads a bit better…
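As a sketch, deleteEmail() can be as small as a single wrapped click (the identifier of the delete control is an assumption):

    function deleteEmail() {
        // Identifier is an assumption about the email page's delete control
        _click(_link("Delete"));
    }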

And now that we’ve successfully deleted the email, we use the $emailURL that was returned by clickRandomEmailLink() to return us to the scene of the crime… so to speak. We navigate back to the email we deleted to assert that it has actually been deleted. I use this practice a lot when jumping back and forth in the app; it ensures that we’re testing the right element. For example, we could have returned the email subject instead, but there could be multiple emails with the same subject. Plus, in the log, we’ll have a direct link to the email should we need to investigate.

And lastly, we come to the assertion itself. As I mentioned, I’m a fan of one assertion per test. In addition, I like to put assertions on their own line, separated by white space, to aid in scanning for them. In our case, we simply assert that the delete message is displayed and, in doing so, make use of Sahi’s ability to use regular expressions (an absolutely fantastic feature).
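Sketched end to end, the final stretch of the test might read like this (the confirmation message and the element it appears in are assumptions):

    var $emailURL = clickRandomEmailLink();
    deleteEmail();

    // Return to the scene of the crime and confirm the email is gone
    _navigateTo($emailURL);

    _assertExists(_div("/has been deleted/"));   // regex identifier; message text is assumed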

Summary

Even with such a simple test, there are a lot of good insights to observe. Here again are some of the key points…

  • Short tests are best. Keep them singular and simple
  • Use naming conventions that aid readability and can double as documentation
  • Abstract large blocks of code into functions
  • Use strategic randomness to expand coverage
  • Make use of the data available in the system
  • One assertion per test

You can copy and paste the code examples above or download them here.


10 comments

  1. Mercedes Moccia

    Dear Brian Ray, hi…
    I like your article very much. I have been an automation tester for 6 years (R. Robot and QTP), but in my new job (more than 1 year ago) I was given the Tosca automation tool. I have often felt frustrated using this tool and not satisfied with my work. Unfortunately, managers have ‘illusive’ expectations about automation testing.

    I would appreciate it if you would continue posting automation testing hints to help us, or guide us to be certain that we are performing automation in the most efficient way.

    Thank you very much… HAVE A HAPPY NEW YEAR!!
    Mercedes Moccia

  2. Thank you for the kind words, Mercedes. Happy New Year to you as well!

    B

  3. @halperinko - Kobi Halperin

    Nice post,
    I’m not sure I agree with the “Short tests are best” declaration.

    Testing “atomic” issues one at a time is quite useful for new and unstable versions,
    but automation is meant for long-term regression. Assuming most abilities are working fine, it is a waste to test each item separately; longer stories that touch many areas, with as few setup cycles and steps as possible, may prove much more useful.
    I.e., running a single test case which takes 5 minutes and covers 30 areas might be more useful than running 30 test cases which take 0.5 minutes each.

    It will also be closer to the way a user actually works.

    @halperinko – Kobi Halperin

    • Hi Kobi, Thanks for the comment.

      That automation is meant for long term regression is one of the biggest reasons to keep tests of a singular purpose (short). For example, if a test that covers 30 areas fails, how do you know what failed? You’d need to look at the log… but how long before you ascertain the problem? And what issues might be hidden by an additional failure later in the script?

      The main point is maintenance. Would the statement “singular (short) tests are easier to maintain than longer scenario tests” gather many arguments? How about a singular, 20 line function/method vs. a multi-purpose, 200 line function/method?

      I’m not discounting the value of a good, multi-function, user-based scenario test… but I would argue that scenario tests are more fragile and harder to maintain, and should make up only a small portion of your overall functional tests.

  4. Hi Brian,

    Great post. Good introductory tutorial, and good best practices.

    Do you have any thoughts / comments about best practices with setting up and tearing down the test case? You left out one really important best practice, which is the ability to repeat a test case any number of times. To do this requires that certain preconditions are established before the test case runs, and that the test case “cleans up after itself”. I find this to be the single most challenging aspect of automated testing, especially with applications of higher orders of complexity.

    A very happy new year to you.

    cheers,
    Tom

    • Hi Tom,

      Great point. Tests should most definitely be repeatable and setup/cleanup are indeed important topics. Worthy of their own post I’d say. I’ll add that one to the list!

      Thanks a bunch,
      B

  5. Hi Brian,

    I had a demo with Sahi the other day. I’ve reviewed several YouTube videos they have posted. I like what they have to offer for one very important piece: once I have automated all testing on the product, how hard will it be for anyone else to come in and run the tests or even tweak them later on? It is an education issue more than anything. I have done automation with other tools and this one seems very easy (compared to the others).

    So a Q for you: are there any instances you would use Sahi and not use Selenium for? And the reverse: are there any instances you would use Selenium and not use Sahi for?

    Thanks!

    Steve

    • Hi Steve,

      I think Sahi is a great tool, and there are instances when I would use it over Selenium, most specifically when the team is less developer-centric. Sahi’s built-in, JavaScript-based language is very easy to learn, which makes it a great choice when working with QA teams that aren’t comfortable in a full-blown dev environment. Sahi rightly markets itself as being QA-friendly. I would also use it where speed is crucial, especially when the test’s lifespan might not be too long. You can write tests VERY quickly in Sahi.

      On the flip side, folks with more advanced developer skills tend, not shockingly, to dig Selenium more, especially when they have a language preference. Additionally, Sahi doesn’t (last I checked) handle mobile testing.

      Ultimately, my customers are developers and QA, so I choose tools that I can trick them into using the most :)

      Hope that helps!

  6. Automate Thick Client Testing

    Hi B,
    We have a thick client (for example, a Windows-based application) that we intend to put some automated testing in place for. Do you know if Sahi would support that?

    Thanks,
    Sneha
