Do you know, for sure, absolutely, that your automated test is testing what you think it’s testing? Probably not, now that we’ve introduced some doubt. Mike Kelly explains how to get from “I think it’s working” to “Yes, it’s working correctly.”
I’ve been using automated testing tools off and on in the Unisys mainframe environment for the past 15 years, mainly a tool called TTS1100 for the recording, scripting, and playback of text-based online transaction screens.
Very nice for regression testing, and quite easy to compare one test run with another: just use a text editor to strip off the timestamps, then use a text-file compare utility to see what might be different.
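That strip-and-diff workflow can be sketched in a few lines of Python. This is a minimal illustration, not the commenter's actual process: the `HH:MM:SS` timestamp pattern is an assumption, since the real TTS1100 log layout isn't shown.

```python
import difflib
import re

# Assumed timestamp format (HH:MM:SS); adjust the pattern for the real log layout.
TIMESTAMP = re.compile(r"\d{2}:\d{2}:\d{2}")

def strip_timestamps(lines):
    """Replace timestamps with a placeholder so runs compare on content alone."""
    return [TIMESTAMP.sub("<TS>", line) for line in lines]

def diff_runs(run_a, run_b):
    """Return unified-diff lines between two captured test runs."""
    return list(difflib.unified_diff(
        strip_timestamps(run_a), strip_timestamps(run_b),
        fromfile="baseline", tofile="current", lineterm=""))
```

With timestamps masked, only genuine behavioral differences (here, a changed balance) show up in the diff; identical transactions become context lines.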
I’m looking to use it a lot more seriously at my new workplace, since my manager and I have almost total control over our local development environment.
I’ve developed my own Perl/Expect-based automation tool. It supports multithreading and user-defined comparisons (strings, files, screen values), and it’s driven entirely by XML-based configuration! You can also loop the test cases and run them in sequence, in parallel, or in random order. I mainly use it for my embedded testing, so I can easily test remote machines and boards.
More and more, I also generate HTML-based reports and email the summary to important people, apart from the full log with a timestamp for every action I perform. Cool, isn’t it?
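The core of that kind of XML-driven runner can be sketched briefly. This is a hedged illustration in Python, not the commenter's Perl tool: the element and attribute names (`suite`, `case`, `order`, `loops`) are invented for the example, since the actual schema isn't shown.

```python
import random
import xml.etree.ElementTree as ET

# Hypothetical config layout; the real tool's XML schema is not shown.
CONFIG = """
<suite order="random">
  <case name="boot_board" loops="2"/>
  <case name="check_version" loops="1"/>
</suite>
"""

def run_case(name):
    # Placeholder for the real action (e.g. an Expect dialogue with a board).
    return f"ran {name}"

def run_suite(xml_text, seed=None):
    """Parse the XML config, order the cases, and run each one 'loops' times."""
    suite = ET.fromstring(xml_text)
    cases = list(suite.findall("case"))
    if suite.get("order") == "random":
        random.Random(seed).shuffle(cases)
    log = []
    for case in cases:
        for _ in range(int(case.get("loops", "1"))):
            log.append(run_case(case.get("name")))
    return log
```

The returned log could then feed an HTML report generator, along the lines the commenter describes.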
I didn’t expect there to be a whole lot of comments when it comes to testing, as no one really likes to test. I am an automated test programmer.
We’ve set up a testing framework around the Mercury WinRunner tool, mainly for GUI testing. So far we’ve had tremendous success automating over 400 fairly complex tests. The best part about it is that we didn’t code all of the tests. We created a keyword-based language that allows a user with little programming knowledge to write an automated test themselves in Excel spreadsheets. We also have a separate spreadsheet where they specify the data to be used in the test, so none of the data is actually placed within the “code”. This allows you to change the expected results and other data easily.
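The keyword-driven idea can be sketched in a few lines. This is an assumed, minimal version in Python: the keywords (`enter`, `verify`) and the row layout are hypothetical, and plain tuples stand in for the Excel spreadsheet rows the commenter describes.

```python
# Registry mapping keyword names to handler functions.
ACTIONS = {}

def keyword(name):
    """Decorator that registers a handler under a keyword name."""
    def register(fn):
        ACTIONS[name] = fn
        return fn
    return register

@keyword("enter")
def enter(state, field, value):
    # Stand-in for typing a value into a GUI field.
    state[field] = value

@keyword("verify")
def verify(state, field, expected):
    # Stand-in for checking a field against the expected-results sheet.
    assert state.get(field) == expected, f"{field!r}: got {state.get(field)!r}"

def run_sheet(rows):
    """Each row is (keyword, target, value) -- one line of the 'spreadsheet'."""
    state = {}
    for kw, target, value in rows:
        ACTIONS[kw](state, target, value)
    return state
```

Because the test data lives in its own rows rather than in code, expected results can be changed without touching the handlers, which is the separation the commenter credits for their success.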
For times when we need to automate processes with requirements other than GUI testing, we use PHP, Perl, and VB.
Our company size prevented us from using Mercury WinRunner, which forced me to find more “creative” solutions for automated GUI (and other) testing (including your aforementioned VB and Perl).
“I didn’t expect there to be a whole lot of comments when it comes to testing, as no one really likes to test. I am an automated test programmer”
I agree wholeheartedly. As a QA engineer, I constantly have to advocate for testing, and sometimes it becomes tiring: I feel like I have to justify why the tests are even necessary. Other times I have to fight with the very creators of the software itself, the software engineers! The mere mention of the phrase seems to offend them, as if I’m threatening their child (I suppose, in a way, I am)…
I challenge the automated testers to test some of the code I have tested
Weeping, penitence, long walks on bare feet over cold rocks with sharp edges.
You can do a lot with automated testing, but for some testing you need to look at the end result the user is supposed to see. When the code is written well, that should not be a problem.
When the code is written well…
It can be a great tool, but for some things you need a human to see how absolutely bonkers your code “that checks out ok” looks when it’s actually running.
Happily, humans are no longer worth anything in the workforce and you can now pluck a 10 year old Indian goat herder off the dust plains and have him test software for you [absolutely no offense intended towards 10 year olds, Indians or goat herders, in whatever combination or nationality you care to mention].
The big problem with software testing is that the engineers don’t want to hear the bad news. And management doesn’t want to bear the cost. Of course, when the spaghetti hits the fan and the finished product is with the customer and THEN you need to fix the stuff…
but you’re never going to sell that to senior management. A tester sits in a warm office all day, doing nothing and when they open their mouth it’s to mention bad news.
“You can do a lot with automated testing, but for some testing you need to look at the end result the user is supposed to see. When the code is written well, that should not be a problem.
When the code is written well…”
You’re assuming I tested poorly written code, when in fact, the code was quite solid…
“…you need a human to see how absolutely bonkers your code ‘that checks out ok’ looks when it’s actually running.”
Although I agree that no good tester relies on white-box techniques only, the article was talking about the effectiveness of automated testing.
“The big problem with software testing is that the engineers don’t want to hear the bad news. And management doesn’t want to bear the cost. Of course, when the spaghetti hits the fan and the finished product is with the customer and THEN you need to fix the stuff…”
You forgot to mention that the test engineer usually bears the brunt of the blame, since “it was your job to test that product anyway…” ARGH!!!