So this happened today… #lifeonthearroyo
I started a new job at the beginning of the year, which has contributed to my lack of blogging but is also the impetus for exploring some new automation frameworks. I’d settled into Protractor at my last job, but I was curious about the new tools that had popped up in the last few years… which is how I found Testcafe, a great open source E2E test framework.
If you browse my GitHub account, you’ll see that I often port a handful of dorky example tests over to various test frameworks to get a sense for them. And now you’ll find my port to Testcafe there as well. Porting these tests gives me a real sense of what makes a framework tick: the good and the bad.
Testcafe has a number of interesting features, but the one that immediately caught my eye was implicit waits (i.e., the framework handles waiting for page/element loading). For anyone who has written their own explicit waits (in which waiting for things is your problem), implicit waits are likely very compelling! I feel like at some point, the industry decided that implicit waits were bad… I disagreed with that then and I disagree with it now. Implicit waits save a TON of time, and assuming the framework offers a reasonable way to handle negative cases (e.g., asserting that an element does not exist), we’re all good. YMMV…
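For context, here is roughly the chore that “explicit waits” imply when you roll them yourself. This is a framework-agnostic sketch (the helper names, timeouts, and polling interval are my own, not Testcafe’s); an implicit-wait framework effectively does this for you on every selector and assertion:

```python
import time

def wait_for(predicate, timeout=10.0, interval=0.25):
    """Poll `predicate` until it returns a truthy value or `timeout` elapses.

    This is the boilerplate an explicit-wait style forces on every test.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = predicate()
        if result:
            return result
        time.sleep(interval)
    raise TimeoutError(f"Condition not met within {timeout}s")

def wait_for_absence(predicate, timeout=2.0, interval=0.25):
    """The negative case mentioned above: asserting something does NOT exist
    needs its own helper, since a plain wait would just time out."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            time.sleep(interval)
        else:
            return True
    raise TimeoutError(f"Element still present after {timeout}s")
```

Multiply those helpers across every interaction in a suite and the time savings of letting the framework wait for you become obvious.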
Another interesting feature: it does not use Webdriver. Probably like most people, I have a love-hate relationship with Webdriver: love what it can do; hate the bugs and inconsistencies in the various browser implementations. Testcafe (similar to Sahi) uses a proxy to inject test code into the browser. Personally, I don’t care how a tool makes the sausage… I just care that it works, and (SPOILER) it does!
Of course Testcafe also hits on a number of goodies:
- Open source
- Parallel test runs
- Support for all the major cloud browser services
- Page object support
But it was when I was porting my example test cases to Testcafe that I found the best feature of all… the community. Simply Googling for “testcafe [shplah]” returned pertinent results for my questions almost every time. They have great documentation, an active community forum, and a fine showing on Stack Overflow.
So in my new position, I spiked out a few small projects in Testcafe and in another leading Webdriver framework, and asked my team to choose between them. Testcafe won out!
My experiences with Python have always been amicable (if not brief). It’s for just this reason that I’ve always wanted to try out Selenium Webdriver’s Python bindings. **SPOILER**: I did just that and you are now reading about it!
When learning a new language (or tool, or job, or…), I try to keep my opinions to myself. It’s funny how often something that seems weird/silly/stupid at first will eventually have a reasonable explanation (except you, PYTHONPATH… I still think you’re pretty silly).
Pytest is a good example of this. Where bindings in other languages generally come with a number of helper frameworks that smooth out the rough bits, Python folks seem to embrace the vanilla Selenium bindings. This puzzled me a bit… until I discovered Pytest.
Pytest is a bit of a Swiss Army knife for Selenium testing in Python. It’s a test runner; uses fixtures to handle setup/teardown (and a ton more); handles test discovery; has detailed reporting; makes excuses for unwanted lipstick on your collar. It does most of the heavy lifting for tests.
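To make the fixture point concrete, here is the yield-based setup/teardown shape that pytest fixtures use. This is a stdlib-only sketch of the mechanism (the dict stands in for a real webdriver, and `run_with_fixture` is my hand-rolled stand-in for pytest’s runner; real code would decorate `browser` with `@pytest.fixture`):

```python
def browser():
    # --- setup: runs before the test ---
    driver = {"open": True}   # stand-in for e.g. webdriver.Chrome()
    yield driver              # the test body runs while we're suspended here
    # --- teardown: runs after the test, even if it failed ---
    driver["open"] = False

def run_with_fixture(test_fn):
    """Sketch of how a runner like pytest drives a yield fixture."""
    gen = browser()
    driver = next(gen)        # execute setup, get the fixture value
    try:
        test_fn(driver)
    finally:
        next(gen, None)       # resume past the yield to run teardown
```

In real pytest you’d simply declare `def test_login(browser):` and the runner does all of this driving for you — that’s the heavy lifting the post is talking about.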
Ultimately, I found Python’s concise syntax and explicit code conventions make it a great language for functional testing. I’ll cover more details in upcoming blog posts.
The primary goal of automated tests is to find failures as quickly as possible. The general idea is that bugs found earlier in development are cheaper to fix. That’s generally true, but automation isn’t the only way to fail fast. Here are a few tips for failing faster that have worked well for me, and may work for you too!
Get QA involved in planning
During planning, QA should be building a mental (or even physical) test plan. This is the perfect time to start asking questions about how to test this new feature. Perhaps you’ll need a special resource, like a third-party tool that you’ll use to aid testing, or maybe your tests need to run on an isolated server. Identifying these needs during planning can give you the time you’ll need to acquire them, and get them in place for testing.
This is also a great time to aid testing by baking things right into the app. For example, a parameter flag in a URL, or an a/b switch, can make a feature much more test-friendly and speed up your testing effort.
Conduct meaningful code reviews
The importance of meaningful code review cannot be overstated. Whether you’re pair programming or reviewing code before it’s merged, this is a great time to not just find bugs, but to prevent future bugs by gaining a shared understanding of the code, removing complexity and ambiguity, and ensuring code standards are being followed.
Write e2e tests while code is in active development
The best time to write e2e tests is while the dev is actively developing the feature itself. This can be done in a TDD fashion, whereby you create your tests and page objects, watch them fail, and get them to pass as the feature is completed. This is also the perfect time to quickly find CSS bugs, and/or to add IDs or data attributes that make automating easier. I mean, who doesn’t love a solid ID to grab onto?
Additionally, writing and committing e2e tests directly in the app feature branch can help keep the app code and test code organized until they are both merged… together! It’s also great to have the dev writing the feature, review the e2e tests. Who better to review the tests than the dev that wrote the code! This has the added benefit of keeping devs acquainted with the e2e code.
Hold dev demos before handing off to QA
These short meetings are held after the app code is reviewed, but just before moving a story to QA. In this meeting, a dev visually runs through the feature for QA, showing off the feature and answering questions. The primary goal for this meeting is to ensure a shared understanding of the feature, including any changes since it was planned, and any testing tips the dev derived during coding. It’s also a great time to make sure your automated tests (unit/integration/e2e) are up to snuff.
Slack off on this process at your peril; you WILL regularly find bugs during it. I promise.
A reader gave me flak for writing about automation way more than about testing. Okay. Guilty. So, to make amends, I give you Brine’s guide to writing a good bug.
A good bug includes four things: a title, steps to reproduce, actual vs. expected results, and impact. Here’s how to write each:
1. Write the title last
A title is the most important part of a bug. A good title should convey all (or as much as possible) of the information about the issue, to the point where reading the rest of the bug is unnecessary… and it should do so in as few words as possible.
I used to try to nail the title before writing anything else, but I found it just takes too much time. And I would usually end up rewriting it before submitting anyway. Write it last! I’ve found titles are much easier to write after you’ve written the bug itself.
2. Write concise steps to reproduce the issue
Steps should have just enough detail. Too little and it won’t be reproducible, too much and it can get confusing. If you find yourself writing a bug that has 10+ steps, you either have a very complex bug, or overly complex steps. This can also depend greatly on the bug’s audience and the preferences of the team. Still, keep it concise.
3. Describe the results and the expected results
This is the easiest part. After following your steps, what happened, and what did you expect to happen?
4. Define the impact of the issue
You might think the impact of your bug is “high” (and what QA wouldn’t?), but that term isn’t very useful, and certainly wouldn’t rate a whole section in a bug! No. The Impact section is where you try to PROVE the impact of a bug. This helps the writer, because attempting to prove the impact of a bug requires proper regression testing. It also shows the reader what you’ve tried, provides insight into the severity of the issue, and flags areas to verify once it’s fixed.
- Is there data loss?
- Is there a workaround?
- Does it occur on multiple environments?
- Does it occur in Production?
- Does it occur on multiple browsers/apps/OSs?
- Did it occur in the last build?
These questions show impact. They tell the reader just how bad, how far reaching, and/or how recoverable, the issue is.
Oh, and there’s one more thing…
4.5 Attach a screenshot
If the issue is cosmetic, it had better have a screenshot attached! Screenshots and/or videos can be worth the entirety of that classic cliché: a thousand words…