New Record (Finally)

Jul 9, 2020

Well... this took a while but I finally had time (THANKS PANDEMIC) to mix and release a record I made with good friends Starr Moss & Ben Sanders.


Nov 11, 2019

So this happened today... #lifeonthearroyo


Sep 11, 2019

Soooo... I live in NM now. Here's a picture of our resident roadrunner.

E2E Testing with Testcafe

Jun 17, 2019

I started a new job at the beginning of the year which has contributed to my lack of blogging, but is also the impetus for exploring some new automation frameworks. I'd settled into using Protractor at my last job, but was interested to take a peek at any new tools that had popped up in the last few years... which is how I found Testcafe, a great, open source, E2E test framework.

Take a peek at my GitHub account and you'll see that I often port a handful of dorky example tests over to various test frameworks to get a sense for them. Now you'll find my port to Testcafe there as well. Porting these tests gives me a real sense of what makes a framework tick; the good and the bad.

Testcafe has a number of interesting features, but the one that immediately caught my eye was implicit waits (i.e. the framework handles waiting for page/element loading). For anyone who has written their own explicit waits (in which waiting for things is your problem), implicit waits are likely very compelling! I feel like at some point the industry decided that implicit waits were bad... I disagreed with that then and I disagree with it now. Implicit waits save a TON of time, and assuming the framework offers a reasonable way to handle negative cases (e.g. asserting that an element does not exist), we're all good. YMMV...
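
For contrast, here's roughly what hand-rolled explicit waiting looks like. This is a generic Python sketch (not Testcafe's API, and all the names are hypothetical); it's the boilerplate that implicit-wait frameworks quietly absorb for you on every element lookup:

```python
import time

def wait_for(condition, timeout=10.0, poll=0.25):
    """Poll `condition` until it returns something truthy or `timeout` elapses.

    Explicit-wait style: the test author owns this loop. With implicit
    waits, the framework runs an equivalent loop behind the scenes.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(poll)
    raise TimeoutError("condition not met within %.1fs" % timeout)

# Hypothetical usage: wait for a fake "element" to show up in a page dict.
page = {"login-button": {"visible": True}}
button = wait_for(lambda: page.get("login-button"))
```

Multiply that loop by every element your tests touch and the time savings of implicit waits add up quickly.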

Another interesting feature is that Testcafe does not use WebDriver. Probably like most people, I have a love-hate relationship with WebDriver: love what it can do; hate the bugs/inconsistencies in the various browser implementations. Testcafe (similar to Sahi) uses a proxy to inject test code into the browser. Personally, I don't care how a tool makes the sausage... I just care that it works, and [SPOILER] it does!

Of course Testcafe also hits on a number of goodies:

  • It's open source
  • Tests are written in JavaScript (ES6)
  • Parallel test runs
  • Support for all the major cloud browser services
  • Page object support
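
That last bullet deserves a quick illustration. The page object pattern itself is framework-agnostic: selectors and user actions live in one class, so tests read like user stories and selector changes touch one file. Here's a minimal Python sketch with a stubbed driver (all names are hypothetical, not Testcafe's API):

```python
class StubDriver:
    """Stand-in for a real browser driver, so the sketch is self-contained."""
    def __init__(self):
        self.fields = {}
        self.clicked = None
    def type_text(self, selector, text):
        self.fields[selector] = text
    def click(self, selector):
        self.clicked = selector

class LoginPage:
    """Page object: selectors and actions live here, not in the tests."""
    USER = "#username"
    PASS = "#password"
    SUBMIT = "#submit"

    def __init__(self, driver):
        self.driver = driver

    def login(self, user, password):
        self.driver.type_text(self.USER, user)
        self.driver.type_text(self.PASS, password)
        self.driver.click(self.SUBMIT)

driver = StubDriver()
LoginPage(driver).login("pat", "hunter2")
```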

But it was while porting my example test cases to Testcafe that I found the best feature of all... the community. Simply Googling for "testcafe [shplah]" returned pertinent results for my questions almost every time. They have great documentation, an active community forum, and a fine showing on Stack Overflow.

So in my new position, I spiked out a few small projects in Testcafe and another leading WebDriver framework, then asked my team to choose. Testcafe won out!

Selenium Testing in Python and Pytest

Nov 6, 2018

My experiences with Python have always been amicable (if brief). It's for just this reason that I've always wanted to try out Selenium WebDriver's Python bindings. **SPOILER**: I did just that, and you are now reading about it!

When learning a new language (or tool or job or...), I try and keep my opinions to myself. It's funny how often something that seems weird/silly/stupid at first, will eventually have a reasonable explanation (except you, PYTHONPATH... I still think you're pretty silly).

Pytest is a good example of this. Where other languages' bindings generally offer helper frameworks that smooth out the rough bits, Python folks seem to embrace the vanilla bindings. This puzzled me a bit... until I discovered Pytest.

Pytest is a bit of a Swiss Army knife for Selenium testing in Python. It's a test runner; uses fixtures to handle setup/teardown (and a ton more); handles test discovery; has detailed reporting; makes excuses for unwanted lipstick on your collar. It does most of the heavy lifting for tests.
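
The fixture mechanism is the piece worth a quick sketch. A fixture is a decorated generator: code before the `yield` is setup, the yielded value is handed to any test that names the fixture as a parameter, and code after the `yield` is teardown (which runs even if the test fails). Here a stubbed driver class stands in for a real Selenium WebDriver so the example is self-contained:

```python
import pytest

class FakeDriver:
    """Stand-in for selenium.webdriver.Chrome (assumption, for illustration)."""
    def __init__(self):
        self.closed = False
        self.last_url = None
    def get(self, url):
        self.last_url = url
    def quit(self):
        self.closed = True

@pytest.fixture
def driver():
    d = FakeDriver()   # setup: a real suite would launch a browser here
    yield d            # hand the driver to the test
    d.quit()           # teardown: always runs, pass or fail

def test_homepage(driver):
    driver.get("https://example.com")
    assert driver.last_url == "https://example.com"
```

Running `pytest` discovers `test_homepage` automatically and injects the fixture by matching the parameter name; no setUp/tearDown classes required.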

Ultimately, I found Python's concise syntax and explicit code conventions make it a great language for functional testing. I'll cover more details in upcoming blog posts.

Take a peek at the code up on GitHub...

Fail Faster

Sep 23, 2018

The primary goal of automated tests is to find failures as quickly as possible. The general idea is that bugs found earlier in development are cheaper to fix. That's generally true, but automation isn't the only way to fail fast. Here are a few tips for failing faster that have worked well for me, and may work for you too!

Get QA involved in planning

During planning, QA should be building a mental (or even physical) test plan. This is the perfect time to start asking questions about how to test this new feature. Perhaps you'll need a special resource, like a third-party tool that you'll use to aid testing, or maybe your tests need to run on an isolated server. Identifying these needs during planning can give you the time you'll need to acquire them, and get them in place for testing.

This is also a great time to aid testing by baking things right into the app. For example, having a parameter flag in an URL, or an a/b switch, can make a feature much more test-friendly, and impact the speed of your testing effort.

Code review

The importance of meaningful code review cannot be overstated. Whether you're pair programming or reviewing code before it's merged, this is a great time to not just find bugs, but to prevent future bugs by gaining a shared understanding of the code, removing complexity and ambiguity, and ensuring code standards are being followed.

Write e2e tests while code is in active development

The best time to write e2e tests is while the dev is actively developing the feature itself. This can be done in a TDD fashion: create your tests and page objects, watch them fail, and get them to pass as the feature is completed. This is also the perfect time to quickly find CSS bugs, and/or add selector hooks to make automating easier. I mean, who doesn't love a solid ID to grab onto?

Additionally, writing and committing e2e tests directly in the app feature branch can help keep the app code and test code organized until they are both merged... together! It's also great to have the dev who wrote the feature review the e2e tests. Who better to review the tests than the dev who wrote the code? This has the added benefit of keeping devs acquainted with the e2e code.

Handoffs/desk checks

These short meetings are held after the app code is reviewed, but just before moving a story to QA. In this meeting, a dev visually runs through the feature for QA, showing off the feature and answering questions. The primary goal for this meeting is to ensure a shared understanding of the feature, including any changes since it was planned, and any testing tips the dev derived during coding. It's also a great time to make sure your automated tests (unit/integration/e2e) are up to snuff.

Slack off on this process at your peril; you WILL regularly find bugs during it. I promise.

Unit tests vs. Integration tests

May 20, 2018

I've had to explain all of the points MPJ makes in this video many, many times. Now I can just send a link to it. And while I tend to call out e2e tests as their own thing, deep down they're really integration tests.