Selenium-WebDriver vs Sahi

Jan 17, 2013

Jari Bakken has a couple of nice example tests that show some of the differences between a Selenium-WebDriver test and a Watir-WebDriver test. I thought I would create another example to show how the same test could be written in Sahi.

Both of the following scripts perform the same test. The test itself is simple:

  1. Go to the Google Translate page
  2. Click the Detect Language button
  3. Select Norwegian as your language
  4. Log the button's text
  5. Enter the word "ost" into the text field
  6. Verify "cheese" is the returned translation

Selenium-WebDriver Example...

require 'selenium-webdriver'

driver = Selenium::WebDriver.for :firefox
driver.get "http://translate.google.com/"

wait = Selenium::WebDriver::Wait.new(:timeout => 5)

# wait until the language button is displayed...
language_button = wait.until {
  element = driver.find_element(:id => "gt-sl-gms")
  element if element.displayed?
}

language_button.find_element(:tag_name => "div").click

# wait for the language menu to appear...
menu = wait.until {
  element = driver.find_element(:id => "gt-sl-gms-menu")
  element if element.displayed?
}

langs = menu.find_elements(:class => "goog-menuitem")

# pick Norwegian from the menu items...
norwegian = langs.find { |lang| lang.text == "Norwegian" }
norwegian.find_element(:tag_name => "div").click

puts language_button.text

driver.find_element(:id => "source").send_keys("ost")

# wait until a non-empty translation is returned...
result = wait.until {
  result = driver.find_element(:id => "result_box").text
  result if result.length > 0
}

puts result
driver.quit

Sahi Example...

\_navigateTo("http://translate.google.com/");

\_click(\_div("gt-sl-gms"));
\_click(\_div("Norwegian"));

\_log("The selected button text is: " + \_getText(\_div("gt-sl-gms")));

\_setValue(\_textarea("source"), "ost");

\_assertEqual("cheese", \_getText(\_span("result\_box")));

The difference is pretty dramatic... at least in test size. Sahi handles all your waits for you, so there's no need to clutter up your tests with wait code. Accessor code is also less verbose in Sahi. Overall, I would argue the Sahi code is much more readable, but code beauty, like beauty in general, is in the eye of the beholder. I.e., your mileage may vary...
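For the rare cases where Sahi's automatic waiting isn't enough (a slow AJAX call, say), it also offers an explicit wait. A minimal sketch, assuming your Sahi version supports the two-argument _wait and _isVisible:

// pause for a fixed 2 seconds...
_wait(2000);

// ...or wait up to 5 seconds for a condition to become true:
_wait(5000, _isVisible(_div("gt-sl-gms-menu")));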

Note: I used an assert instead of just logging the returned translation... same difference.

UPDATE: I've also added Geb to the fray!

Verify Sorting With Sahi (or any tool really)

Jan 7, 2013

I recently found a bug when sorting table columns while running IE9. Ignoring the obvious fix, I wrote up the bug and then, as I'm a fan of doing, I wrote a failing automated test for it (Defect Driven Development!).

Testing sort proved a bit tricky... I thought I would share the results to perhaps save others the same pain. I would also not be surprised to find a more elegant solution out there. If you have one, do feel free to share!

My example is in Sahi but it should be easy to transfer to your tool of choice. The gist is:

  1. Sort your column
  2. Collect all the elements in the column in an array
  3. Copy the array and sort the copy using javascript's sort()
  4. Compare the two arrays

And here's the Sahi code...

/** ~sort test... **/

_navigateTo("http://www.javascriptkit.com/script/script2/sorttable.shtml");

// initial sort...
_click(_link("Name"));

// collect table column cell values in an array...
_set($numRows, _table("table0").rows.length - 1);
var $appSortedValues = new Array();
for (var $i = 0; $i < $numRows; $i++) {
    $appSortedValues[$i] = _getText(_cell(0, _in(_row($i + 1, _in(_table("table0"))))));
}

// copy array javascript style using slice...
var $jsSortedValues = $appSortedValues.slice(0);
$jsSortedValues.sort(caseInsensitiveSort);

_assertEqual($jsSortedValues, $appSortedValues);

// Javascript's sort is case sensitive so we "fix" that thusly...
function caseInsensitiveSort(a, b) {
    if (a.toLowerCase() < b.toLowerCase()) return -1;
    if (a.toLowerCase() > b.toLowerCase()) return 1;
    return 0;
}

First off, thanks to JavascriptKit.com for providing an example for my example!

The script starts by navigating to JavascriptKit.com and clicking on the Name table header to get our initial sort of that column.

Then we collect each cell in the Name column of table0 and store the values in an array, $appSortedValues. To bound the loop, we take table0's row count and subtract 1 for the table header.

Now we need a copy of our $appSortedValues array, but in Javascript you can't just assign a new array from the existing one like so: $jsSortedValues = $appSortedValues;. That would actually create a reference to our original array; not what we want. Instead, we use the slice() method to select elements 0 through the end of the array and put them in a new array, thus copying it.
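Here's a minimal illustration of the difference, in plain Javascript:

var original = ["b", "a"];

var reference = original;     // same array under two names
var copy = original.slice(0); // a new array with the same elements

original.push("c");
// reference is now ["b", "a", "c"]... the slice() copy is still ["b", "a"]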

Finally, we use javascript's sort() method to sort the copy of our array, $jsSortedValues. But the sort() method has a little wrinkle: by default, it sorts lexicographically by character code and is case sensitive. This is unlikely to be how your application's sort works... but luckily, you can roll your own ordering by passing a comparison function as an argument to the sort() method. In our case, we want it to be case-insensitive, hence our function caseInsensitiveSort.
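To see the wrinkle in action, in plain Javascript:

// uppercase letters have lower character codes, so they sort first:
["apple", "Banana"].sort();                    // ["Banana", "apple"]

// the comparator defined above ignores case:
["apple", "Banana"].sort(caseInsensitiveSort); // ["apple", "Banana"]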

Now we have two arrays and can simply use Sahi's _assertEqual method to verify both arrays are sorted in the same order.

That's it!

Test Automation By Example With Sahi

Dec 22, 2012

I created the following example test script to illustrate some practices I've found useful when writing test automation. This example is written in Sahi but these practices are tool-agnostic and could be transferred to any tool (though Sahi is a great one!). For our example, we'll be using MyTrashMail.com--a fine testing resource--as the SUT (system under test). Our goal is simply to confirm that an email can be deleted from the system.

The Test

// 0001.delete.random.email.from.inbox.and.confirm.deletion.sah

_include("functions.sah");
var $myTrashMailAccountName = "trash";

_navigateTo("http://www.mytrashmail.com");
checkEmailAccount($myTrashMailAccountName);

var $emailURL = clickRandomEmailLink();
deleteEmail();

_navigateTo($emailURL);

_assertExists(_span(/This message has been deleted.*/));

Size Matters!

The first thing you may notice is the size of the test... it's quite small, and size does matter! Barring end-to-end or scenario tests, small tests are best. They should be as singular and short as possible. 20 lines of code or less is a good rule-of-thumb, as is a single assert per test. Long tests and multiple asserts are signs that you might be testing more than one thing. Short, concise tests are easier to read and maintain, and lend themselves to being run in parallel (multiple tests running at once).

Filenames as documentation

You may have glanced right over another practice I find invaluable... the script filename (which I've placed in the header comment): "0001.delete.random.email.from.inbox.and.confirm.deletion.sah". I'm a big fan of using code as documentation, and that definitely includes filenames. Test script filenames (or class names), like bug titles, help to concisely communicate what is being tested. In this way, a list of filenames from your test suite could double as an impromptu test plan.

Having an ID in the filename is also very helpful. This ID could be the story card ID, bug ID, or just an incremental ID. This will allow you to group like files together, aid in searches, and provide a cross-reference from the test to the associated story/bug should you need it (and you probably will).
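For instance, a directory listing of test scripts starts to read like a test plan. The first filename below is from this post; the other two are hypothetical, made up for illustration:

0001.delete.random.email.from.inbox.and.confirm.deletion.sah
0002.send.email.to.account.and.confirm.receipt.sah
0003.reject.email.account.names.with.invalid.characters.sah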

Script Walkthrough

Now let's walk through the script a bit... The first two lines just include the script's associated functions (more on this in a minute) and set the MyTrashMail account name we'll be using.

The next line uses Sahi's built-in javascript method _navigateTo to navigate to our system under test, MyTrashMail.com. Sometimes there's no need to reinvent the wheel as Sahi's built-in methods are quite good.

Abstraction Via Functions

Next, instead of using Sahi code to enter our email account name, we break it out into a function for a couple of reasons. First, the function name checkEmailAccount() is more descriptive than the Sahi code and aids readability, and second, it can help maintainability. For example, should we use this function in multiple scripts, and should the application code change, we need only update the function to fix multiple tests. Both of these improvements could likely see further enhancement if we also incorporated page objects... but that's a topic for another day.

Now we're into the heart of the test... we need to click into an existing email, and here again we'll break this into a function, which greatly streamlines the code. This could be handled a number of ways, but strategically adding a little randomness into tests can expand coverage and enhance the possibility of the script finding a bug. Thus, we create clickRandomEmailLink() to do the dirty work. As an added bonus, the function also provides some error handling and returns the random email's URL for use later in the script!

Functions

//  functions.sah

// click a random email link and return its URL...
function clickRandomEmailLink() {
    var $numDisplayedEmails = _count("_image", "messagestatus0.gif");

    if($numDisplayedEmails > 0) {
        var $randNum = _random($numDisplayedEmails-1); // zero base...

        _click(_link($randNum, _under(_cell("Subject", _in(_table("Table1"))))));

        // use _set to get and set a var with value from the browser...
        _set($currentURL, location.href);

        return $currentURL;

    } else {
        _logExceptionAsFailure("No emails found...");
    }
}

function checkEmailAccount($accountName) {
    _setValue(_textbox(0), $accountName);
    _click(_submit("Get Email"));
}

function deleteEmail() {
    _click(_submit("Delete Me"));
}

As you can see, clickRandomEmailLink() gets the number of emails in the account by counting the number of little mail icons (messagestatus0.gif -- thanks, MyTrashMail devs!). If there is at least 1 email, we then pick a random number between 0 and $numDisplayedEmails - 1 and use that to select the Nth link under the Subject column (_cell("Subject")), in the Table1 table (_table("Table1")).

We also return the browser's current URL... more on that in a bit.

If there are no emails in the account (and the test currently expects at least one to exist), the script fails and logs the failure via Sahi's method _logExceptionAsFailure().

With me?

Found Data

It's important to note that we're not creating an email in this script; we don't know (and don't need to know) anything about the email. We're boldly jumping in and finding what's there in the system, and testing it.

In this example, we're using a popular account name, "trash", that seems to have a continuous stream of emails coming in. In other, more typical situations, we would have another script elsewhere to test email creation (and likely more than one). These tests would then feed off of each other... one creating and one deleting.

Final Stretch

Finally, our work pays off... we delete the email. We could have just left the Sahi code in the script but deleteEmail() reads a bit better...

And now that we've successfully deleted the email, we use the $emailURL that was returned by clickRandomEmailLink() to return to the scene of the crime... so to speak. We navigate back to the email we deleted to assert that it's actually been deleted. I use this practice a lot when jumping back and forth in an app, as it ensures that we're testing the right element. E.g., we could have returned the email subject instead, but there could be multiple emails with the same subject. Plus, in the log, we'll have a direct link to the email should we need to investigate.

And lastly, we come to the assertion itself. As I mentioned, I'm a fan of one assertion per test. In addition, I like to put assertions on their own line, separated by white space, to make them easy to scan for. In our case, we simply assert that the delete message is displayed and, in doing so, make use of Sahi's ability to use regular expressions (an absolutely fantastic feature).

Summary

Even with such a simple test, there are a lot of good insights to observe. Here again are some of the key points...

  • Short tests are best. Keep them singular and simple
  • Use naming conventions that aid readability and can double as documentation
  • Abstract large blocks of code into functions
  • Use strategic randomness to expand coverage
  • Make use of the data available in the system
  • One assertion per test

You can copy and paste the code examples above or download them here.

Should You Just Stop Tracking Bugs?

Dec 6, 2012

With apologies to Ian Betteridge for the hyperbolic headline, I wanted to share this 6-minute lightning talk in which Jon Tørresdal argues for “The simplest solution to bug tracking: don’t!”.

To paraphrase Jon's list of 5 "crazy ideas":

  1. Don't track bugs; just fix them
  2. Delete all bugs in your backlog that you can't fix immediately
  3. All newly reported bugs are either rejected or fixed immediately
  4. Automated tests are created for each new bug
  5. Set a WIP limit for bugs (e.g. 20 total)

Ten years ago I would have scoffed at the idea of not tracking bugs and deleting the bug backlog, but today, I can see this as a realistic possibility. In fact, this isn't far from how my current team operates today. So what's stopping me from going all in? Guts?

Gojko Adzic "Reinventing software quality"

Dec 1, 2012

Here's a very interesting presentation from Gojko Adzic entitled "Reinventing software quality", in which Gojko draws a parallel between Abraham Maslow’s hierarchy of needs and software quality.

Fellow tester/blogger Augusto has a nice first-hand recap of this presentation as well...

Test Automation Roadmap: The 5 Ts

Nov 22, 2012

Where are you headed with your test automation efforts? Like all journeys into the unknown, a map can prove to be especially handy.

I think of test automation as being broken down into 5 goal groupings that themselves have 1 über goal. Thus, I present my test automation roadmap, or what I call The 5 Ts...

Time

Don't find it... make it! Developing test automation takes time (I just rolled my eyes at myself as I typed that). Of course this is painfully obvious to you and absolutely everyone else... but still, time is always an issue when developing test automation. You will always struggle to find time to write new tests, design better tests, maintain existing tests, and refactor tests when things change.

But you can't find time for automation... you have to make time for it. Factor it right into your testing estimates and/or block off specific time for automation. The amount of time, or lack thereof, allocated to automation is a great indicator of how committed your team and stakeholders are to automation and will be directly related to its success.

Tools

The right tool for the right job... There is a veritable plethora of test automation tools available today, and there are even more opinions on which is best. Picking the best tool depends heavily on the context of your project and the skills of your team. Choose wisely and try before you buy! Give your tool candidates a spin for an iteration (a week or two) and see how they perform in the field.

Tests

Not just tests... great tests! Start writing tests! Write them iteratively, refactor regularly, and fix failing tests quickly (keep 'em green!). Value working tests like you would working software. Start small (e.g. smoke tests) and expand to improve coverage. Consider continuously growing your framework rather than striving for the perfect framework up front. Aim for beautiful tests that are concise, easy to read, and easy to maintain.

Transparency

Show your cards... Get your automated tests in front of your team/stakeholders/management and solicit their input. Testing should be a group activity... get the group involved! Share your test plans with your team; set up regular test-code reviews; pair-program. Celebrate your milestones and accomplishments! Schedule tests to run often and post results for all to see. Fast, consistent feedback will improve your tests, help manage expectations, and show a return on your automation investment.

Trust

Trust me... The ultimate goal of test automation, and the destination on this roadmap, is trust. Without trust, test automation has no value. As the oracle for your SUT (system under test), it must give answers that you, your stakeholders, and your team can trust. Such trust takes considerable time and effort to build and derives from success with the previous 4 Ts. With enough time, the right tools, great tests, and a transparent effort, trust in your automation will grow. Trust me!

Improving Testing Time By 288,000%

Sep 18, 2012

Before I explain the hyperbolic title of this post, a short interlude...

Summer is over. Long live Fall! Along with all the things that keep normal folks busy in the summer, I also play music, and on that front I've been busier than usual. There were a lot of gigs and a debut album release, not to mention the practice time required for the aforementioned endeavors. All of this, plus a long-ish project at work, conspired against writing about QA. But because of said long-ish project, I have lots of fodder for the blog.

Anyway, I thought I would ease my way back to the blog with this post...

At work we have a basically-sunset product. And like many basically-sunset products, there is no team associated with it. And when you have a basically-sunset product with no associated team... over time, you end up with teams without a deep knowledge of the product. But also like most basically-sunset products, it does require a modicum of testing when things around it change.

This is where our hero comes in. Not me, no... my boss, actually! Because instead of making me test this basically-sunset product, he volunteered (admittedly after a bit of complaining by me) to test it himself. This actually worked out pretty well because he's one of the last people with the company who knows the product and its history well. The downside is time. Because he's extremely busy with his own work, it takes him 4+ days to find time to actually do the testing. Still, this is how it went for over a year...

Long story only slightly longer: this past iteration planning meeting, it was time to once again strap on the test-bag (not sure that metaphor works, but let's go with it anyway) and test the basically-sunset product. But I always feel bad seeing my boss struggle to find time to test something he was nice enough to not make me test, and since I had a few open days in the iteration, we decided to have me spend the time automating his tests. So working from my boss's test plan, along with input from the rest of the team, I was able to create a smoke test made up of 20-odd (and counting) automated scripts that performs the same amount of testing in under two minutes (special thanks to Sahi's ability to thread tests).

Thus, what used to take 4+ days to test now takes under 2 minutes. Four days is 5,760 minutes, and 5,760 / 2 = 2,880, so by my count (Google my math) that comes out to roughly a 288,000% improvement :)

Why did we wait a year to do this again?